[Binary tar archive — not human-readable text. Contents per the tar headers: `var/home/core/zuul-output/`, `var/home/core/zuul-output/logs/`, and `var/home/core/zuul-output/logs/kubelet.log.gz` (a gzip-compressed kubelet log). The compressed payload is binary data and cannot be rendered as text; extract the archive and decompress `kubelet.log.gz` to read the log.]
D}LI6.@=/DLtD{^V:d,&h j4!XZ,3BR^便k:ғ P,:AYt~FyCT5#np?ԔC[ʆI)ui8ChK P0QaG/1hb[nsg{ƹE}Km |&hlJ# "V"@!qK@ ZANGAPIG9`Ť%B"AEԈ H!UR aEF|m:-sie-Jv$`06`%,]: &hc^hxɌ^+۳@N-[s:B9Np /(DL  zj8E(2yD`n)^* /zF"2|r -*uʀȁDDdPI16r:Ь=nr+ܑB[_Z f3vPjen2@"^$t[ \݌ŻX !'E0(^h0#r+t>TA5庻TC;W;UPTs\T Ɉ*mV CF8 B#Ÿp&h%pVX+C0!WVawzX$"#zl5`h4 36+ܥϧj HlðÎv?̋a^KNN)9׏3zLt>:b6Ty,aQǔ2R|$ 'S̅E4oM>&H 0E.(qCvYI(*][)rkNiY-j"Ɂa4sXdw %;&̦ᅒh-Jw yƥ`bdF/cLrh{j]w_7[pWz~i 7jPha4{Phd0'ZK(Ir yI=uILBDN2`|@eE.qVq`20A3 ('@צ$O#&Ҙv!XcOǴ T[KG!1z ",iP%LR#1s)$5{N9Srڣ0EkAhg +=bAQ_3%]vG$W0' /6cZ{_Mϰ@N7ds/ɞ<~[ٲԲeOnQEvYz=]wWRFosَgA>E`d!guJ*y(BR(k6)N;y#QvVvR[ Rd>a ..&e@ً]3t%Ά9K=?ӻ>[5; C@42qƿAEJf++ eU`Faq2$ᇯ:_h֯fYd29IkV:i1Y%tuXJ8BcP'e4%U7'ɖ65@"(EĆdK( :c١+dCMO?FwM61@ɐL []6֎IlT2U!3b eV- iĶ6mwg'uwe0hZ /_o|7_oc?W<d6v.ߣ_VVnyzl ?̞qܮL  `wƣͿ6]=xrɣe&.o?ׁh4-dԻ.~j>"]+0ʃ MY`qV,_/؏YKtݠ]Rk?Z~۶›UhzVDx&) x ~m)eZW6+h/ml̖ECSai{[Fmaq =eAu_.JS~Q[۵mӊ/B` {jMequݘe6-P_a)>oV$.O¥)/wR#ع`gcO&;sLeRE_/vKo*߇dݏxZ|ӎd;Js+cq9O2܁9Qnnu-l+5F.P?lWvվb`9DMMf\@/@vsmaf]R"'u)[d? ߍg`XZ%R8 tk夒t&RDA9=2>{5P=yU? hhx͎cJfS:XLW duRHlBѾߊd+*t/R6RϾN4[0`u4YW  v>5ZSiX2lE(Ņ&t/DtU6ډu϶O9ȒyRP-LJı@ђ=&@F:Ԗ8mE1S0f/Ţzl*'LL:eĻs&t%8tH;я| cEЬ'}rc&Oo pv+Lci'x)4xYiǓI9o~^v~'(bH)D=2zQWigOaZ 5j螻Qprp+ l{DH:'eH2}f.J 典(6Q(j gڟM>);JJ/-э*HrҖ"e,-QJe(!0"n+~5hLN9%|bJv%il]*|JEd%MT&,_")I&*mM)ۙ6C`lOvlg {dyȒ"8S(6}!6>%P,ڻVJ/Do$vkP&%Iȴ#WuĢDc;L Ƕ5EJe>mho$I_{),E"*ɺ痌ٻ؛@&|`~jk(0I(h+ 0P<-YHLju8Z&mtҖc!@v5b|ȶ:6PU%xQ"q 96dLh-.  D0Jl#<3%;:biȶqgz }15zmnҴM 8~HHM(v?Tӗ\(%MdODc2J&Fi<EHOI,a))('%e+Y> l=|(KQv+Fi1C ũ"M12\1?tS A$8@$Z,)\sBC,2;ͰIB83q6([98OJj?s˔ +GHfI"K)NArZLʢ/} 7AM[X>9:9x YVKRۑ#5{6!dI\Ot B0 TxNZ9K",tB /K3<1ڮzô1(;Rz.+/* 2 a jWv>[H 0F$I({ٶmxt ZRݾ/>r J:_|=/9~8r06%l08Tu4'9ׂ򇽠V#3vVSVXr>&Pʅl0$hdڤP< jbNWDlဲM RrDP1DΞ1gWDS;^L Q2Q?m;/ޏ[g.{n=›ߺzЗB)t>b7ga2^'o=nuں<=7]?glo-n̵u57|YbX7J˻a2qŧwGo}D#,(\8qVKNO%wwK\ttӀ.8.<&;gdULco#>6H!xeײS.5gËQtD7'9 TM fȘPF[\¿(6s ŠޕtA[c0Yeࡨ36MIe^פ}=_7|75'C^|gx{}UkĬ7W>?ڿRm{DPςtz V^{Jw]#7êה恈+ W6? 
⛽mnfEwXʥ%Ѷ|" I fOj@̡D!@TJd&ʌDz)`(ȧ)]CL F+V2E¥+&u%O&mr'wtxgzpIn3iXf/:Yʙbu8IY$ϴdcɎ-ـQ@c ?R޺6+Y?ux e LӼ>?kЄptz"d +Y T d0Vl`w)R!b@e_664 lv F$U,S."YK}m/"L:Ę b6l;XdZ4*d,lQR5crXL1`Uq)1hU&tack{K-}Zڳ=E)3KkϽO2y$ FY&[\۬nǃi8}?&b_ /Ahe`<*ީ33[Joޱont eu>LpiT#6_ NyF\ԅ^h%'VV}?n~'?ϋ|y`P/&7߿N\Ӓmkm2\y5 d)d2\dۯܨVN*{rSeʢ ِvX|um>[ )Ɔd *o569t%[K%osf79@8#G[h陛E2Ntr3}:sȾ+CP$1;/|T,%.036څ.g/LQ 9Sٞ7znWگ<Ѫ[_[6&;R4*'kD%Cc@H˿svR+߶:̓v&۹gݓ!̐AkДdX;6Bk"F2)!B)d%Lm!!ץ$t%VՂZIB l"h,9)I!$qtgܿs&cضyH[| }zv^gڄ!K48D5ɛF:de#FDNQ?vT-1Z"Z1o(`\w ٱ&h0G/HdDHiU:$Cɂ` $E9Hhe-hP50LYҖuvΎ7S̳ 8_`*xv9'A ۝`}ޕ$ٿBt/=ܽvB4&)X!%DQVnU,VEEed|qOGjU'vOt.sCe2W%_7EV\ID:48({a>Y pM<,YM&P-e trYP`#(g -V0x1la=βbĴTk=B;e\ a|y^mx*OZs&I,pt1CA4ׂ3b<:xM$妅U>*0JӬ&$G ozL pU#J˓g)'"ge_O^"*{?{(R+eSd;dQz~_J+r)JqA 9]:ܧ֡96 ` ?$Nc:8Ty*0LA&%㔀"` 9HqR:3agԐrOrU(Me }\j+E6qgTF#u0S-^f8nT߀ B&ڊ- pQJ_p&"@+"z /] 7B-P)9{Rf*fIdQs񡣑[T)W"B ǚ8%1MRF9A ϖs|xy*Gyt j%ԅf?u@rX4TS\'*CL$Kr$5(pZ& t)N.!&DSe}P\E,0GA0;u$DrpgV!g j"Q 'gK:wz|#Qg?h_NVv*iz+w1:3WYweg9as$I R;JaPDRՂb"^!<HhԂ2yU`)Q\8 2ep!U(F 1whT)װ7&f ^; zӊ}uOc1긢Ϻ{nOkđ4GGw)S>itp ȿЩ:󄸀sk3q0 jyco 󏩁^5||wANOoW\O["?{2hխvЮSJQv|=>"c4M;y} ȢwVlOsZ{9D7dD`r+Z!eoCvzDi5as<]X,Y s!.®޳a8Q\%Mӊ!jdWI[ԦasoO y&lӓn^?Fe;Yn!^fev;)^Éx$4n'_F+Vz&3Y}/qhN|u<;pݓNڲ,;ifqگm&gm.wr\23nc}Ys;ܰĭNPiYg!@'}튾 UTO;mYK偡aYq;m(q@nݗҶ,:@r2$H\(8)8BISF+W,$d<ChF=WN{˺dl\k)D5XxxcO æ~!R57KK=Ea,ev٨]X da,,@!&V,Zbtj˗'q'h>5>߹[ϳb߷1D03N#dWZv$(Ͻu}^t0~QmV>z]ʾ 2'R¨WwtZa_t~)>y[o4}lj'OO\0ȹ!9]$?~r~ 0px}7f OpǏ -`_^IQ~{[_?}8;8~F5WvHVk?҄]nO<1*"bJHd ,Zpt4B-#J:AIӲ^zΉޑ5|;n5?]MW9 Nxų!U 8 K U'Im4LR+/ ڈ#0m6b|Lv_e&A |Mp wr'X@JڹBt&0hnMm%bV(\ @B)QvkåGWS8YpGԵVC;mlkEDG((wZU!`-gJF8J*LH#=λ{4(4LjMeD.4*Nmg:%Y :'dNl3D A): R:jF!9NYm}o>7@: &)yHx-y BEPq5R hZQQw[{Tƫܛ6mW<#^chT|RP]4$aR4`*^K7t(״!@9s1rIg! < hṮ`."b4B8@ Rnjk&g2JZjw9 à_߼={uWgdzޞQfNqz"B.|w WOV;mqV'2{p? 
&nޚͭYO=Q6M>^\;|gO_t0ڇ/Sց:kųCk„u؊ӯة7XCŜ K,r+Lu~kt 3tTw:޿>|rt.ݜpJD):+Ɲsb1@}h{ ¯l,@;g|쑯VvϧN_٘em{ >u_y5/-={6ne^qcH uZv7>I%9{yLy7s]tF)B>O`aEzIy7JfB?Ok7f&eQkq y8 kQWe_[QȐHcm/ޯEKEIvSéYONn*lwIxn8 k$1+#1ˠ-~e~oz?PNqv4MCF8xg <0|A7,ܤ䭥{R:G2Um_1+xtUA|$)D4 "˝e2pBQ9,E5uTdtn]gNz򷅇EcK0`)15^u0+f8'6T B**cRaL>F^5%ˣ,R̢9Œ(u3k)S2`H`QǍSO렌B 2Gփp&a16RxT5 g3s#4hP\TG@`*ھv+fxC!HA<1`Q7#` UEsB_,1]ӧ,>ڊLcHKcʵSh^Fnb\KEk!IЏ:45šq8X@jLihf@kK %i1hN YcligG̦557<b¥!QZ@96+pz&Q<; `Ve!e%9'4$6Zf6IhS g7N忺]o'ZJ \PX'B6 vHᛈlI:Iic2+4Q{٣AhcΫh5$%)خfBN}*Yp"}FɹZkݼ(I$.&p%etVWΖCl^7H:(-WiʘU%;Rz.W1 d[劊g+UK#P2t;!TJƉ"-k3֠e͠x@Tܽ0hg9> Z_TW6hB, B\A]QU ;eK'<.egoWQ*RGxФeRBE #$28vW٘Ώ<+ m0pÙ>0ʢ<ݒM:ك/ˑ[vǎJk p>)%HQ$zlqNv/bPJ7r*U(I"ڎ6mgv!-?v()<&i<&QIV`5ʮ}Ovb~aiRygZ?{wcdӫ?/7Ĩձw N\VP=:goܴ|Tw[3M;]5o;֊ێZ<_0 rD7(k/ WOXj?҅CdFɘPFKW/c&R@cQ١`5A䎵`+͛^mҭ<]3w}|[of?ܺ*n1mn ߡޑQBP~7gy-hHWlQ V@2c|,%aѡV(ک,w yVلUhE%_Tͪ7a\}{}?vg<Jxjx"tE(Gs 1z@$EmJ ^7m\z\Jl&rmj1̾j1CR /YBrQ5&$z)`X(OS[ $rcrQhpuTH>T;v5blO @u*qPی`iCNpw^SMBO}9Y;[s0A8'w-ˤt!"Q9Sޚ4[VYlb(0(oSA"R[Nlɹ(ƾq.U%G]),eFj gӭ((x.\o Qt]p9lן~B9%oӴX\?C..ׯ@cPʩ<ꚅ,Q*Y526>ž`p#zSZ'^&j!aERt!:%E)HFt0J $BEg؃"B }e+1@c5iTEY C2d*r)1hU ]ҺuI{NI{ BQ{vChvLEUn;3?jEaq~4vE0@<C*h|Oq/'rO7*))+JP3CaJbIP ^D1 ڤ@u4)q<( aÂY.쎮Vxۡ:e@"WU@h{׵nSos n=püy:.f:-o6PvGݝW-1=z^i}=L./wsw{gguvK<8x9q~,R*n4o%w{SGC:R_rNz]}#(N2:YF'd,StN%eB,etNeet?v {Z.W' _g73âgiD+|ݤ! UUEP" 72&FΒ ‚Q,FPlG;XHkB+)6A?MaMaj7fMSf@$`3k4坦ӔwNSi;My)4坦uNSi;MyׅӔw:k)2e'w.i]:My)4坦uà+;My{4坦ӔwNSi;My)4坦Ӕw7w lYwM/GtS}~qP/-f#bIp?..5Zh 2`~`WI*%]WUz=: ì[ObVKa;;ԳܛWKwN,^iF;1_utg0oWr𛳕]_HIbIZ0 # &]-*AXHj"*s$(( i)x*=qxѳWg6&CFi=jB3Q6;[(c2SfOQxҴEi]K דQ fY#$T"js2hXkyv4}~j i;YgT/FvH!*|^ʻWϓ uqRPCT2 y;z Deo cImO~Aj&3dP,tI]өh-ZEA(Y2Pl 53Z*"CK5:P)'^0BHqZRc8{fyC`pg\ d,AjHӁ~ݵ!gov"Kzo{5i?+k%Y& zHU9k:SZbK@N^8)] *g>mQ&<3eTT.g%LlBi Hod@I)I…Po ? 
g?p5핀O&سwWX7CKżf{,FJe׏W-+mHؗ}(wG~xfY򈔈fSRvCKٰjV**+#/#&bFѸL  5JHۻ8򔞻YIM@ ft{} mهKPagrc`,&__c/n/lAq^u vj8o{|~؆ItSO}-ٿ$}T[VM/l{sЛૂ+~Mw?N=X^uokC%ʵ7x?U/_B/[]5sfVŬy9\v՛ +VUVPՉgsczn']BRK-B%l\I/듕[KWKw\צ3WD},,683ԃsĽӳ w.;.&VZ^h~EV[4:oy1<'+fVt6M}{{vy[\S*8,D tRױk\ЌEfهZ( EMP@n+SicYEʈ qTR2\W% OD$OEb\S2EFppw"=m"sIyN3؇|Gu æ۪^"̹NT Q!<~Gk"Ψò\=u/hv;&l#Y2@ Xg\Fg}N1 d ⋅ez[gyX==byyn@v-Ұ? q/ Iv>'ђhF7pUq4PvO٠p%W E]hиDcíKZl{[$]yzJ^xtGQgCt#h;*!;-i*xzIPs%ch`Fz{|D\&2F"Ǡ0wAxgr.AE`jt>jKS$ bSm9{T0PhK%]k[17r쪦2? C쮇-||h]i?tKSh0,ˋ ;BFpQA ^G("aP7l" B\0zkuQ1*j< " X2+04RYPdz!ܡNa%Bþw1eV}W<#^fhT|.RPż4$QhDUE8TRiErb0^S.C8T xPcF]DĢiq8tUt̢{wdѝI:M}) n`8$! &򏟯?\뻫/rcxo_W27\}#f`v69>@N@ [kIJﷸ57b[O=}+J>r}\.ۢaXډ|g>8m8 yA>iS9? fI`ȹP7 n'Zӝa~RYnZJ5?A G>hSj9w3{nn IDf)ŗ6+\ilf ~tj6c~pwW > }, n v0޾-[۷^wqWrs=ow ~m| iN²nQ96^rwIAb"1?~txֿ6:/Gsw!>NYiI&Y0nWMF9$5~vtHgIGw>V r.ez8Co?c{yM &!Y4FJOԇבw(mJJ.znv1_u-g5NAI H2h3e?qӛ%ЎbH͍w63'xbMJZ*'\1!KH@iU>f{7Acu4#(? Q:-,r $}TKQEz'؜8rOe$̯wc;eA΄ XWBp*몲*몲*몲*.mWUMTUe]_YWuUYWuUYWuUYWuUYWuUYWuUYWuUYWuUCʺʺʺʺړuV^WIM)acF\Pۀ1Z8SjA8)O6@X Y# ,9( VRyR|~_DZuU|`ELqJB;ћ5 񄧆Ds"0V뎀M7nߎ`+fQGr0%#$Qi0XHDR0 *yM_56j&0ӯvKa=VGʬ>iɵS^Fn8^* MX#]'NlGM3 K袙-*$R8qֳbg[e>V+[[&* /[(m@!RĤ%Aބ|JW_2%|W(?7{KmAI/\@Gmj}$SZV8Ik4(ac!"t&{+CAK,K?j84"w_"}4 D 4&?Hr'\S`@I4B(t`\V.#eZ0?6}QbSX3o|@ʃqhb8  06h㈦RdDQxpF)T(K0քۻ}юb3X~IЇivx=%[ ' BB%\l(a<%)76 %E(FE#DjgI @*%Ἄb#F1($9I4 O3qDi>Rp`x^:ԑ*z+85d1($ W:r!B` 4pz[IDhaZL8uv4 {ZMŰB4q7`u$xBrt 28H.}Ps-,HHkY=ZsR5?" M'$4ZR.%LYPI0IHlF~޾7Im|&7{ c~D9+PL='^$н?/Ƶ~̮k)Hkt: :ٗ.C0>be:ѳi+::L8tcbK4vF?e%Iяmi8*H15&dԽ4>tpicӕCY+\ۍ{ 5F_n౾300L' x^#Dh)e OZAzbͿǷ,maմ մXyͻ, &e9=k%5{o;֊};ۊ.,% iNoG@Ym2=\Ǎ^u| .@hTFbaf1LE]_z>^=*Hk9(IZJW (po:"h%TIn+*ϸ/fu'}ғ'GscLb͓}wa[v{x .8Pո8nԄp>{J'q T#C|g8! KcBQyVp,wxчwMP!{&\w<Mϧlkw?MًU%=^Q_ݘ\kdbo^Pcp[*PHV'؝Jr|yokI9\pIɇtzJR|FD@O&wʭ"<. 
M :bbUAl?^"ܤ"2KXGьKnr%9/ךPRJ=_Or'ٯS۟]ɑy]Xj~mAѝfu9yս.6oRob gVy>Tx}2#ZJL9bP1\ [{m"lEet1yOr"_ MFd=C&%4DEwJҖ8{0݊R.b ͌#mPj j g…mvsi37ƍFF×o1@>Iϵ2o&Ӑ !6#V`0p%%o0"ԦC\d{8f+JDl = ªm6-v1qv[l;+eojW{XT% H*0xEІA@bLPr $# vDYUU[k]_pm?NGֹgNjv|ei7NB\ErфXAr&b puқW1S ̴`tk͎/vɷ/~' _W)>Lb۟ey잰/騵pey}dM!O_Ͱ0>k:;73weˁ7rNDbjFD{TT?SOVӰrDL1o(I!K$}\o l)GD5YW]*njjIP̱T '%;RU1Z2V79u>B_Bl+x=ByvoΗ?Sϻ`:G! 47VI\_O/Wgw`yQ((\\n1#*d{Ft'Wsѕf= +ZҟڭqV1)Fˌ)fZS-svP5|rG(t k %|wW\zN_CXw[^nn_ q DoGo;nbZo~qkہ~X<\]/Wt,zIz^? \*oůHY{[RvnLV3mqaSdݍ5fu {O֋o}r.&e&Ru_OȞHJ)}vv[C&&:׼\#|B>^m 7$/.5MFAtBۋN.ߠqp6n/k7d:Ze &~"ː&d?N?{ G5np+XE7 i]uXgLӄi e Ҳ_mb_,|[1䑜40_c!*/ULIJ!9 2ϰ]g!]vQ{nTmr¬̷[K_'mZI|tx #.Ma=P۪dpSt)\]h)7VSc2(3Hv9Nf5:/!ЃcF<@ $9-2JeVJôL种%17"A NЭNxS`mmG"YԏD?ON_żcQHYm(SJ"8dxǧ^1l^4d*"u1IWCj,זXkp`SHcޣ. tpC$*a7]_GW1#TriuhL)=`,C)9kp NAmu4C '8D rrhwXjy3r@`X%]H ΃p:PDڢ pd.."(άA`&"PM ;#"*Õc<9eA>F,~IT26҉䳦@k6֜'t9`l+,4Lk$$BY xʠ63U[q/z뤱f0 %l@k;uZ D{1Ĺdn Fs0qXq{yg7iu:;5bi$p;q>kD0u ƕF"i` t*m87 FZAVFbj.֐T{r$RV  L99 `-gtm  xS94cF6Jtcl9D[=DK0NT@=@e!% #4#))W!`}fFEcQ!|^/W؊D\d>WIE3I1R&xyư apaG!7J5Z1< 1xT##>Q^;g5(D t أtnFDr۩Eq:csjE5q5Z'(h7 5k*E\#|rRΤXHLVWMIB1X9ݳFKyנ\U9ڒ?샩x̌Aٗ% 6VPX?Y@IgP #f@D2^HO#0%7aDD,St1 L|h rlмȐ 4Eq+RzbHȁܧJJ Ԭ5KW5+olWy3__qb*xt:fv, C()%HijYzS7x]2@ţ(YaRkliZ^ǵ 8+6!m?֎=8D5[#th8 Ҟ >R ޟq%Qm'lW:9گe~-Y_x~7Sڤ35F\#}܍2BmP$Ձh4 $Q!R}?'A2fĨ0M--ӐS DQ3Bz֝{s9VN>V[T2i!_moeI~ju?K=۞R6jtN X\E@\>edFzj3Z=9Ow,cHoᅭ3V.ncc\v-Bn]CzfPt:LNT"‰cPF1 6Td1jlreI":P`8g3 g}xyqqU'lW?tb<Ykkez^ez^ez^ezgI{I+̮4Yh?V[F})ⴙ}q`%3w2T U)CUP2T U)CUP2T U)CUP2T U)CUP2T U)CUP2T U)CUP2T U)CUP2T U)CrPQ>2T)|'p:2TmP2T/!=L T2J&P*@%dL T2J&P*@%dL T2J&P*@%dL T2J&P*@%dL T2J&P*@%dL T2J&P*@%d= ?^WSRǥ. 
xn۸A I gĥ 'q l8XJKKH\Z}tq'#z.JLJW @=IU2P6uҗV/>Vj$zq8܌NQ7Mtw2H7F$/VZu6dbln7_z;ǂ@y)eI(|MޯyI޿oִmA4h{ BW'[ҍ{tcv}ŀ͆b.ækpU 7C{/gK^_+,B7nNnrd=n/J 4P144Zrt>fn-6<|YBC](s e,y<\ n#V!g&;P|_0m@]WDiPmhp Q> ;%./nv'juaI%TnLj)=  B'>@ԉðq31{LgǕζlP7oC-z&1~7fLu&nHqd>oYh!);yU%aȡk_۹ ˋ킧T(_Uo&I 6%~Wv[Xט.Վ2EY7[]nCaH|sqB*s1A"e"be1F﹏4Ac#H$P .*#Ox n .575|o g{0bFMUwp|r9U/enҢ)8u*m* @|m4s} %iؿz0mwWB־1~?|oPFm]4~0o|_=rVqt(TEmPֈFIH*#郩T^ګՇٗ T%RHC+KFDoAs_TROKKgs mzm^Ƅ )S1GfƦ^1VD(&Y3Hp GG,G x,9 QoH?^:n#힪/<6 0N[Db\ cƖFjDi %׵9}n{3K6خ^ {w'Mi.glԝ}t%ޤ &&誩I>~ `{c{"5(vUڸRzvym`9#3/܎d|d`#=&؛p=:;ɧև_ >Żf>|2!WLhkCR/[n-}t4CnwՓQX[\<▱ե/.3<WX1t"IW2b|>97 ZGyFu1qV{tHSFZ+ Vj o~bxF 7W,oRǰ֧uR7WJum!;HI)qF0l"\ٹD $h)>%+/2A>59r_N%Zlu D0&-+¡+w3WvZ:@/9ֱxw.aϺX9ӗQy1a40o%8#눓:#V81Ѭ+! bD <* s~]`FʩR ˽)9t%8$.-X@jX)ʆ'RdrE<>笚e@Qy5JD" R@`LI*MM=-ꈜh-JB))&zK%@^Ȍ ^`A ƜL*7gN?gܮfÌ=y!Cyya7_ᅇ94Ik]ۛ;w՟k7՛7Phq4_9&$ 6:eM !؉ch1H,1+ǦFې? JۤB:`^esEL2slsl;%Tv6(\hgF.qVqBI x rGQpXG ~Q~"1Sc_3sD\8b9V!!($^2 :łK%&) ̥x^ mg/V())D1,,i[m!sm,q,5`L-EBLFIյ2A<*NTO*E<6Hʽ:$T/t^mT@%E$8*l$0 H$(^,Z p ]mQWRl6.1!к=x|H*& 3: H󩛜\;7'(j}Às"˵qWiݙgY/9eQ@'$i%]2e"Wb-kTeNZB+)iaL1 @r^{m,B['"oISˆ^FcD2T`"RSFDD b FрG!eLD: ϬqO[ lc7[ueSۡ9Ƹ^DK [P'Q(r qk #̙žck)兏 j-4*k@QAsgцTR PEzW@ǩ!owU]')ڂ ;Z5BRafϷ;Ox(KU8R1yebT 11;ڑȧwI&qj4L qEm% %3_z;ǂ@yy=tM>^:c  ī6p1n<}?lLiͭuj7{s2_ԗӫw 9I9X#& B ِ w[`=]c2h`i r풘7@wnɧ.tt߅2Pf!B#˺P82axfs xUz;I([m~a`GIqAU=qüpSr+.m%{uaI%'|s[ _X>Q'NFuϜI5,/ <0H]E3 ԍG+v3EvJqd>oYh!);yU%aȡk_kcEu]Z(gEpěq7pf߲Ya<&HpBLD1F﹏c#H$P .*#-EyN"Xa~3=޾pc޾9z[,t꤃W,s0o g#=~>U[\P ` U&b_QmbZkj@tN;\B3~JUz?؜ry mmHFKV2L{simaNTpAU\q |Ykc ys͑r&LI8+kǬQFYJy>%kYf<8; '/y$CWhm tƄW7/ ӧ[N ~S6d sL17lL !:+$88= L &ZzA%}f/$ÅʍnTxw.Goك(^rHd 3PiƝVc  B' L'cA(@0mɢG#V}lb!b42dK~ R'&:aPS>A"b*hE*(/ cWDs];LNY2 fd;1DE qKjKo/QaT]&9]e9yú?ַ62o7K<qW8]-uSJSݖrm߀fq%ȟ̯ oUӵ-mD] o ŰԍWwL:@siߵwdN\;gR iٻF$W=wy~g` vF-mI>4Y$ʊ"3i3AYB.~ Uw~>;nWwRpP8{h YކNa]YȎNovx ǜ$DbX8>v׿&7->}9}N <tPTZvHiE @g {?Ve<7#ߧi5"qAW̬BF:"c l@@n=3‚mfigGi}0KԿShl5qw@2>/64`8.5\`JzLmO$^>{vݣ=_GF/5_E>v勋z{{GoL|3^?RV6gpe>\l8!g[ip=J_^%!&euɃl<9CI/=cqzKWw9jTLnR'oh1fQ.$S 
XA!"Sx5FBRg-y>Y!'+F+ j~ /$_R+v@g]v\\[ŝJ$s P;B{YJM)S[r7>ë婫lgM(}6JɼDN"X 22Q VW۱i&"'x 㙖hn1Ȅge%٦d> PTjlig%kHGt@a[K R&s,j™>gH5CRlh}o5MsG>g% 2pHA+-.' ! *(|0)2fs*B0qR'+*@TGIZ"u0.R=*lhy%RqTqpzYyp, U%2N5€Nrϒ$kXS&paގVJ8zГ_5le[p>oaUE\`8bR5:f@8/g4ԛG=X&nb4d_cџ&ea nfޏ@tI^lY "@,d#%H~s4|7k^"FG#enɜj K:D5NQY^zF(B(%+bʡDA h^2RP,0 Ch"S)4֊G6qS4N]0^ᦣ +oyNB( 񌕆)% lP>텑hb ݮ!-; KWda($alV2 C]9Bq0#>[ULĜ2مBܔf6Ķ,&;貊`GUJ%ۮlݴ1#r`8G6 @B:3 bW)3e1.̎IIBoc4Qmګfۊ$B|3sUnw+qz/H{Թ_-k/<OgF!=gT@`c/%r|{߷2Z)"8"<"]wEJ{7 t]| zN]'i6Ș+tW+̩IiuFj\ζB2Ӂ/3Z-|-wN঎ֱe*,(&dZ4)2GcAShJS -Ů6{a9:(Bigޞ gFD0 Mn`@nZKgg!TGL@HJg~x3qq[ܰ!hf5i>L-NLCurUtI* GAN3ӂ~&2{i>U^LfAPrV9h㐁aR}dtC:y&:t $.ii@/Z'$1ʘȦs%YRcOB^}W_|yzm=uy+i[-N:9mϐR.ћ #x8O{l?ŕ3== j'l]6.x@RLȐIHA$C&&4OMKUdd\mj<۠\_Tcg ҏ}="T{#n:0]GD i#)6*bh"s3>`eIB*K#P*{j<^g--/~~vY;>|Xw3ݪZV7Oz< W"_':S ֩d)DCF ~;C'~-PE+ c'TS]S="2].R*D̙Ñ5JyZ2Mrj0\x5Snkƞ+y+:?Uj#ySeѕZ|3it@rB!ZIcRJXi=keY;ԗ;ﭾT;f٘/O =_ކܭEÇ6qh\\[|'Q3C/PrZ,UsqKmtMs3*)Fpwwȭߖ?Ҷ<"cP?6ww-gM\]WlM:42rX nOLtؽ;+.Gʦ2˹$6D;#l`<= c lyDJZ) !")LV n"UQYq|ÌǷl3+yͮZ0>\L㏉lx:^ΐJ,+"CU\s(i!)A(Î!FNwr;HO8_Koѩw9`f` T'ʑNFg}r\n+_s\SL]~EYƳ4MC۸mф#_:G4˰f#zvܼq].[{ƴNDž[ЗRM4:ݫ&erI#1%i03\2FpE$V0^ 5^@f}Ay&)(Wtk:۷.Ӊ<)*y lч53{ZLz\ f/%Ϻ5rĂ ő.0#^~hְ֫))eRk4퉬dmk"ӳ>UZKZn)k*.8\)f)5#ް0‹03(xxt媣6fY]ԼgmU=E5dtLNxgg,Y))jΊyO[VhOlީ BD tң]l64_>EIp88}rp Y> Z99RAJL2sCRykL񜘜b&oonote ۽f"l꼮tPr^'JQ=~Pҥ?_ޜ\ݎ#6JiQ67aO^A y#K3J6Z:.SWoЬC6>.?˲F^B6ɐcx"@Zf6%"`óu(aVAEZ ӫajz(4x WGR [`۴ȭ.ԭ{g8UhϪqK尋zf7->PlPAN+h2=:RB[w0HࡼYnaPJaVm$z}qCЩC}~e\h1{Y/RDAhcS9'"$8XYGϴѠlUNpzE]{ rc Ԕq>u}AKr{"{:dŤTVFeM <#_^^x uIvݴUmk9ގ׫V><@<oNh>hyy}Sg! 9 ^M2tí$x2: ]ոcy5^;W(Y0Jٶ:0mص\8$#JgY'F-3 RtQ|cbY`q.Kйȕ*r!C0mȵQ5rFZKi񵷴giOOsAӭ6=91`}h} @e{y79@P' R@#TJط5ۚmͻ֜18ǜtHƢ mQC |rQ "qGUP i恆BTkҔ [|Lbi1! Z4M_ʀB-.T tցG<>f{Ƅql4HdRC`2gpU!=Gs#{]L>bc}8+Z=ɩ'7TFAJ:Et r\gB4Gn{ we!g;\8gIHC '.୷l r @HAN%Y٩FΖ|!Belȴ]I_7hvW_bӧy,s o.NM3{J^xs &Ƙr(:xLBT4A=^78=%:Bȍ&7J(FFIZ'Ut$*`~: +-Chws ;nA1gAƒ8h!i9j{t9YI|44m#H)>E hM1ғwu$G9xM/Hui6՞*R26gm:i<Mf?}~S~pܞ˻ODK)"7B%qLq׌A` ѥ?b.Mכ_ZXͥyͻ-6ו5oyu[me\re%į(rKJyvl-E`QW|5~? 
^NqvT9^P5oɎ^y,ԁڐZ ?=ȯoo.foMW^"J$ϦN"it>^D6 hڒ;1ƏK["8S~A1ei{۷wRpU(3ˆ`l۷7󯼍>KXU4ѥ~))WP (чN77Eǝ2vx3#!nv0p\H_6`糎2w~.'] f%9' q =QjϿ.:MzV/v)i܂':r+fj2yy alg8YZ¯ȜJZqֆ~ruv2g2I]vQDnuȘI V#A؜sB`Bs4ݱc$sڴQw5 RР$K>N"QHe5&Fϲ2*B뷢lEuh5ر(]Da.lHu>5Х.Y+0`$z$QG];[L+OӒ$|pˊcl,ldR2cߘ@)enN)k)e;؁v~1)9 Jt-K)D1zggg_W˞FBH@nFq!pw!Rj/vD &-reXmfzCΌ?n_+2A*.IǮgs*V6և殴ibzz"JHST" (Yn](DnDP%WwT~Zr[ }q;0‘JŴ)5!c4'cs ғTό`HYҀ/ s(]@c GLP23$`*k4,:Cp*Id_.3-pԋɬ \gmgdx.MNȜ %܇ȣH\%} lzɭ& q ! W[RYΌ3 s2dCdj$z9$|*p DtV[f=۳i@ 4kx&-Sb1d6[Dc(wU#gK9.b6qW2smC1T(e2WEYL9 OL=d)UVlE|Rӧ\q jpeg?YT \6{Ujśa'~RGWPZnOYH$HRK䳅$6d`$xYǎS]x IZ#m4%_,pKg0`<0b FCl"u/m""D[I .ZAO"|XdW᳕o^ xTzR1Er JM$#!Zkzf@ g4L<镋5SW*tM>Ҵ4CHyBBlrd!,iIyWNfRRd ,kt5\40o2'}C`>>%Zza3Y2Yd(,pAF(R&ъN[e Q W4/UqjdّS,2rp: ' F) Z r[+"ġ2NF8u^{z}5:ͼfX1Y,ž>:9$1}@DcJuH 4G52} AzI鎤(%IΫca(s]&E%$FGg+rULGd^"c),'FmwhdipAt`k>}|O4Iأ#,؃tG 4vJJo?5IɎˬt4a\#hd~kn8mVV2[̧W4 4@{ĐYc ̪Ů;ӫaju|At >^hpC,B6i[-H]{gњB6lnbn}0?r40*>vs w-^pkܞ@[ HDH1yQU5{1 ƼZ'>؝TLݿrGO8 fճ'VW$םV]=Zu"u<*uԕ|у:Q$4FT# !H"?\i:j91-N??{F_nM|_],[`f>f|&(Vr,b%m=eKv)vǯŢP@pV9}2A$jՈu<ח7d7ppY˗9X=E"_&gW?XIV2`He+/-w5i?{ӹ$`܀11jjz/z 8ywtZKԭJ.d |08Y G-ע2|O\^r0.E t gzr~!T@_:J-̴wx,pl'\kƯo_Vf?[N%zŃ?\OffK}sY|93;I*sN6I$?#(S0gŐ\rNej:Tnz(b3LI1Bbwsv.7_UEV'2J@fVDj;`vhI>xgQj.7Q7␵k.*>ڍ; i'L^ޜ{%R-,"T'r™|9̩vwV@9j!D y>KռY+N9$9[zt.Նw[￟pP8d86fuҎZuozblV<_/ƅ|7q]; MW;lgZfI`;^\]E;.圊yl>xz(:pNյ'ǩepӒ˦K] %VLbWස@Hp6EQ IgF 4&GvZxt]hxJ2* .B]Wy#B %wmiCz2,eZ^%cykb/2af_?~ec|R0X}}5 -Mj7&d ?[{JrlW;谕<&8?|.]\ T+NpF F|Y#bŢ\)!1N8$嚑RVm&%&M߁^l8Ā|#g\2Bzb@R>1&aZ3RWH!l9uURyL/R])P)v}.owOVj oo[É hg-Dz0\/.NQQ뜿 z-{ۊ};0a߶oTZ L{\1kɼrf^IP<$X3BG=v=L=(J:m=U_~6*f[/SLe+߫xŻyZ nAx7b$0ӐDV\Q,0G" U$qHH3쥭\Di2 z~*WE>]Xt#|*ڢģG=ol j% JIf4rcLt}_ٰz;rIU7&7 {'&Wd netGjv.ݘR~LHM&D8BZo l2YtxB)) ZM*pAc*!J:G'}JJ􆣦9hy)) &A5Aw@5R 8C"Y(S=IJd@+v62lej߿Tj}XOs[J^.q4ֺHh)jQy)0vnN{eC=lNH)rf\KLRε&k 2ޠ'Bz KrE*Fi&ruPHY"!ק69k5hbtt*}:n*vw61̗;t2޿ϯDN3Zi쎮+ܽscF >\t%r+i@5ou_"קn.P+l]?w9tfw٬;ĖnMݝ7J[-C~s˧ۃM7c󈶻Űn;:q@G.}yg[j+\ٴkn}7m&ӤVzƕjlC,=>DDPLʣ 纲]..*zξnh!CP%(at\`RxFLQS RpЗ -S2>=jkoIs<=wyA2SRL[Bi C'zeAyfMD3p{Q^ l}0 
l7Hq#=9y:|t6")Qͬ$tj˫4=!uy DADB]FzAH>$*_At$ 2DHG4Ϳh>CQ s='Sh_Rt(aV (6HBAb|gA^ l(P}#vbZyx@X1LLZQmQ\ Җ(&ojFU !9Gr\j UO? q %2$R*&TR5c1r:݊RD=ua| ]ׅGՅ|S羹r/Tv4| \c3QD MTL1!6#z 3p%%0$Ԥk!60ceɥ *GA10L>P.Fn&r.O])v`BR8D.$A#ha G"a`,iuYT;-(I= uMKc%Bxt#g>|x6 1+Ɠ[?ՈFF5(V>Ww"4PFYa;|piB_VtlH ( Xz5D%U#b̎lˊ!(Rih'_~j/1OUSJ Cu=RHJ"P%Z3+a`E<,x)C {ܥ@!/Rg-ךROmZiNq1&HG4䓖GU̺7~\|=-eQڜo;b2c&MiR.4=(Wk?+bBuvD+Ǚ$%LoшCCTĽNJd<d@RY0`;5wJEB 3Ĺ$%RX1z%Ml+ qC 40 +3C/Q<_,8C"Mb% zV8V6ԉ1{J`uC}jܱ<+| Ɵ;N5g,98RHk#'tc" %;Kk /Yvc/EAo$rJh#rCMvRA!whs t޿yB++=ʎp^⬂X|,%,x#+5+| :Ge &@{nCϱٙ~-[6ev{"+R9XE~w}TVyj1M"<F PҞeWNiī)rG$̜z_I@lM>G>TuB%g`MɔRs*%g2\uLt_r1T(k׍>jhKˌJ;4 $!Boeg3-;CtrkKK"SL&c$A6"dѢcQh-biP|7lD)00Uٺx>zH1S5MӵzN`6wu5)?qIEȤ 0DrAI𳇟=* 3 4Tk) )rkȽc-i\Y6&x:u.DxP2#@'DLaY-SNg>q~}5~lx8zUClse+th̹<%Yx:w0ΌB-h")I^8$PABe{2Ȑ9=E%* E=a,o5Ŝ3@UbQ'#g/Y@sô‚{v[Mj,dYdZ,/?*l:dhd>}*Q%"D,UIYRʦ+ɁZ{;ڑYgM.G#TWGpn-f>5}&a0v L@yy_"/`FqNf̌}FC>q![o"_1Mi% 3kҦe8~?_U`tY/7tRnJ EYOqɈrw+J&6*pEjx^.B.kK eK z}d≅`|cZ^%n"j-4lbü;Yh-Lv#^ZKZnt䶍r\0IY%5 Cs]PN2呣c{ Ev2zsl<6ֺeچ&-2.bV{C9ΰMNxWʴa ~^3l;nؙwB:Qk_5ݬz~Ɲ?@915o'N <+p<~ou$N@Ƙ8H8)'D 9 )r|VZgM-55sOh뛫+{p,Q`l4gaSuU,sw?P]g<#N^j2]=:c1+H|e 76%j$5tDێs1)igdcutVfZwNV  {.T09BȔ¡\Jl9s*jcAl<8v(Cd(\P`H(]9[yBT#BN3%C_ZgɕVů7]=D++ϏVcnprp Z4ރN_bNh!HJ&! >AG7ӜxwAGo [ch.:KK (4hd)訃(?ǜcN&kum8LxO.dw?=w?|po)oiP|:e˯A)uBzf@r ?? 
7~hAJ ͆uی %bԘqk47z",8mOߎwéϗgg>q= $f NMt.ReIG/p{;6M\_K[THyCo3 4"NfG ߉Y;^{SֳL#׫)CABGBYs];LO4]!$'~Kgg;Kk5tB?v ҺF]{<^MMc-&ʗPL((h{#ps7 !7k&c;[L8ɤ߶j 2s~ɷ?Z|S@3Is&G#94=k|7Pt|'-4ZJws"p"ZDn") (t&ֳ$f˰ ~z mV[5@'F5^#^tyy)lӎFE#1M-T7Κd]*p" וO*PMmz2dNK^+v h9@ LsP(H:*X_NpQA+>+61=^-WŖxU|M(Zw!ʏ\|7r/_{߃u4j#*J+R F*)Qwg̰RH0dt&WSQәZmJ ЫaƜB'2NE]!@tUW /Q])'`WU&WSQWZд GW/R]p@{ݿ<9u}u28X_\r@D+i=@ Qٔt%9}㍵||^9>^[|4ڈ4uhz9MTyHrAELŝD(ΐQK(K(^e^ 8Uͫ=QkvkX9p39n~T64̎:<4O?5[^eb+C4'bj`6ܛq)p m\Ӓ{ðob e 1^<@8նm σE2I)b\R*2+UHQዑr>n0qS 7;{;P^ nu0cjnZ1 6En->:* {fR PRB0#@'DLḩblʏ~gעI0gF:9:'= D2Ʉ TAPPdHI ig0mVG#$C3-U+B<%9%x)zr;Yz1)5vEʸ.Q,Qi))nYnsX;rSގ|vQh䊲mjՓ,tQޖ16gʰ8lB%4P`t2~JHaKUAj8_1tܝ'C?s6} ?"k5Bʉ[2 ڹ`73ۤ Q4[)L"Zy-MF8Fȼܲqzp@*M@1%e g 2 9T08 I$j ggUQr+g[N Vϩj5 鳸Vk&5b>U28}':p8%Qiu< ℶ. )U2J% zSF\N/B>V^<7ͣT4"H8!wSK7"I%Iʇk"'uGD*hx}dqR/5L{]DCI Peٵi#w$pa}CQI0ԗN֨zZmG &vE-L 6R %m |PMsnѫRj|ᒴSW% TYoZlqt@ӦEiVn`L/LBA{0Zop31ZLP$P+7 Xf  ocv6lu5G;`μS LD }er4 ߿^lI!NLڒčYK pG MdxI^) RoӮ]lwYt <.UL"qʈ4A,)DHEdYl{7|byx{ZmAic> nc"y-T)9,$mm/O/K߸l}=u빆l|Wܩ? 1jSdr^+O#˚+NBEVmp*}k<=5 ud@ʖnK3v^:9B3 8,hٝd pcP̸oq0}Bxna݅uH9hI8XvȎm}]'(^{ X*}#? ?h4"H%dH Yd;P+BYd9ba f S;T{ ;lu''49k';<p͛MG?G_Hoo~kL BN~^(kb>:{ F;[C>C+wZ| e\򑷌q=Gsy'F_':Y<הB>}F\W CqLL+L6_' *?g;+]J8+7tpk*?ZлEϋM}T45Y1@h߱PWbHQx?XWK٢`0N~YꑯԤп.ӚIp]_k;*DFs^^}wrfaSؖM4:=Щ&sN}o(+Q^>?0U.Gqӳ!/ˇ/VW_[m,<e?ՏZ|f'39+n\qY-:M]{PUkm4iԴRd:N& /zŰ~my?I'fGk%`u 򿶆O]M7l׮v6&)=V7) Yem",$09bZ69)m)eMR׌=R^76s)yC%QM` d"&U"Kmk2>{( Rv32ԅk$)1)) o601X`9tZZ~e@kڶB յ7q)74ֺ[@usW` bF`#h}Ej/)2WX l&QrInǨ洶 c9\,^d"b+! 
$َFt1x &kQPT `FF%i&ZztYZx[PTc9kFΎr6U}lue"ۆh3Ƒ4Ad[)%"m!6(-Eυ/zRPYdk_BFB` XbjMyFj'~ҍ4&wa51Rh_S !5xR FG6 2q7GűTrW3LCu9"6 i%TPQWIf)(eQED\;EȾ r7PUeY(ى7qΌhC2RX0VXlqq/\E:qȾKGjXsJdz7K!a5d0&)Ř,q3ŀ2ŘP4A`*"*FiFdb޸X+CSv @dBmN"Cc!T2OgbMf܏S{O?Lǁ~s_x1\J;1W;$IhcBDG(Q&e[r:+CvLoCQR#)ƅYuAvB;>K0#O,,$gP_M2}яo@7{#lF)$ćD"j Xp1ӠSR\)cNQ |Q')e@qU8@(tJ"el@2vnE~}ћvmJ$lg j%Aൖ:CZ}ҞlejlT{~=[17y3>%By{F /.D;%Muj9|ٰ<ҵdY蠥}U*Qb&BhTTy{-߽oٖJa=>r])maT}:=m[|*:⯵X݃ 5u؎H%Ҳ;"-NjCODY8-u7Ikڄm;m &BY._ieZ&~D]X/py"dǬL.8k4{L* LѤ {Q2b6X6}0k{kMlj՘s.\QTƅ/]죆o%͓̿;=ƖY`lLD= p- Tc+H4T/)TJRɪ6uduq2G FPRRF7ȹ_cYYs.SܱwZ{D/B6$rEȌTEx-vXIdZAoF. h7 :5EϖcE"Bg{ܸW1o0Y'{.8Qh=#J3N~+ZG(;km塬,vp"lbImK,8m,;ftݝ›w]_86'ks Zu~EdO-7DjοnfWwO9tf}ՕvH-j9m;En󍖫d<^x9o7rŞ]k:NǣfOQr7f"bkΧ?oignsHřl>#NpztsӏAa9ȵN.èɜDp\ɜ9>֯^ ItG7sg #q~Q#:C}J5>gh!tycoXB ITnǗ@}5oH\b1 }qF 3iF)KYr|\-Xn.1,5zOfc D1 &!)s}=?2G-ކKxvSQ80ښ0'p9R !^kis}r E2Bl^"#ZPS)C{1^K߹#5qUY. m,}|gp\$[0Sڌ{&; o:ݺrg:G]=ԅ4,F,:6i0#n:&]HgÆ.wpQr[|٭(eJo7|hXRQ JN` G0Y-Ed0`4L`WkwYɸW <]`! 'L+@Bj]6眒JY )b"3 {C/c~LXKURTm..J{HHX խOf%FXm#ы& :z( 2[6픧r!NZDAhcS9䤧iE0 !c!d=+ F^V^F2qRǔmlm |nX;n|~hՔ@diǹ_r{"{:dŤTVFՆ13g» _12ͦ ^b"pa_0xVϹboi:^y$q.d/ǓË˲{?uI#F괂V2:pkE#Hia~YjvjܡwhLъ[,%K!5 $'YyRhd -ͬ Z©XV*n&KYf-Eܑ`mȵQ5rRҖwKrS{A1Á<[F !2>EjdzlA:7Vh$8:}1ґSbL1'BvB[<;9Hd"Hp&{HQ`H%BImӦ$?4޲VNF}-fҶY>=|$ ڱ>ٛlw>*Mn.&'s!Ac#œ@5Rc?{cKbcs)(J%DmL.9mi:i'G'>K> fٵf[ebji{(6W {K*a*xR  )d)EŪlN+Hy(Tֻ)f` D'ʑ\|/z|\y%ϖzqU%uYizEY>f)3f9«g3+>5FDDƱ@'D!5+JS{Z]ۓٮrV!Ҳ QEt,Aq&Js&zΘVYHB&*|f:D,WNE%! -]0- Q':icIVP^#Ͱ{#OadH8fϷ Tw7+x+˗G뻞gZZn\rbFy/UqmR G9 '4$P:-uڙ |]$4Np5-5wηBȍ&7J(FF DuREg !@OZF7GACPqg398fM3@tYxXHZa(WfXҢId&oGM:6aAp Ϧ@)2HL=.! s }4=#emU{~p6xB?{[&ڽ<Ų88Oo_}O~oO_^kZ? 1B,AFް,`1NZ$f>?_Њƛ-dh^s݂o29%~ww\ -m0.DY f_7i,яyZg."q0 ( +DvO[X̝^pJ/?=''0P&b;yuW7XDɌ(h ~iSfGێ-f_D_z3mbʷPwhPЗų!}ŋY65;-T(۷sߺKNC].h.=ޡKzW:m{P[EVjJҏEn15qI_@{邟,,czkb)2Vh&~?_(l@ Y&I q.ȭ)) t:spTLh.^;dT?U)nIt D7jL ylT I{TruT-Lص˅a^ug {]=ו tOb̜ 7\<x/|Y峙SUiُk?*R<O>F4{'7黟HXN FpovR;wٍJ{t_i&2 ЙBT=op{=ًԳ?wb%&1RKJ"7. 
چt&(~bjGE䬊O4/ÌKSݸ \Rh'&*d4)}h#sڃ$M&*>D@Uҧ`?jsGU3L (a&8`5 bfb>8 DtVw؞e/Lkb@J6g:Ec:Fwm$Iec?xb/ 4yʂiRKJv{'xa2)le̪x"*fII|YYΪ gO;\R̡fzeᡤ^'c,Ո: ,`j">gh+Ks6Gk|3&%P= O0Lx)CO+@XS,&ժHոIO6\TSYcB Dt0ܐEpd6rNJ}?ɹǁ1\jsn>˴E\Z-5с,XͲ1:fm""4=U"l|+=̣Pb$ h"bF'ȹS ٨}?yX߆bfȡa+[7>;&]ߤ[6y ;z"Q2NV -䙒MJFAFO[vxJ))<u*=+,RR76 (YHv5ftp3b႕X׼^^ˊ<~}2ϥXl%9?#,:1вzr5{ŤС񽶜 Uaȝut ;,tVK.#ٽJ8(z<D %2\\L 1[ ELzx`J _EndE9i oe61x>i[Fň!-zcr>:|Qm8{!\,EP<ޢKn¼Ss}Y9WKL^.`h38\'cGOӔ_4yVxKhY'8# fK"ݘ1h;-e2K]щ>됺X1L_qLwAt'KXF7~}~&+jWV̯"羽M7GA:~gӣ_<&&ӯ tevW@Q`ӵ}ץQY]٩z~݊-ukC.i=ӒiΧbY[Hv=@;yօ9Aǻ!w<%NdzUz%yIhM Ct2LxFl:ge4`W(mIcZu h59ѕᄾϷEs|0лmMX{&F`}zLu..warӕ5D;405Q;wF2։L0ڝfWAʃB@^C.3nQ2XAFvI%ڑ"VRRanS18-1pYVgND읕΢WEw[j3o+dIpg Q6=/_1r2mϦ+dcsdESgBd4GU%,Qy&KV0[Ndf#w巘<$d$Z 1z %j g-94+;]%;}Ɂe}d[_xC-AǷ;,Ns/2qg=\mSq6~G?MvPy2x$B؉ B9fREF&iuIY[Uc޳V, I1[/"NĦt#L U g=ոJ5,63k`!'ʵ]]fApq26ToAIzb"$2ڐe{Nc7c,Et%go"4WElid sQ0齐T:( 6L#%&*DMeĮ6݈xN[P1gLjO&΁b*X4DZD+ 9gA$әev@21C&y 2išXP" RV$1c9I|Ն{{]u֚""!bCĭQ,l 1xҨEgZD EM3'-3h^34oL*b(e r}0TR\d1'@K#J[b~L=[נbfɡ(+hpqkD 2~pk hEL rT7,[J$TS`❵fǡxP'Ŧo_#*t2Yܐ H HkُϔL+~,>Qn|>z)Lg< v?r›7 _蓿߯3݊"_tO>_d~-Y׳tpy9nN;^F x.)&id[F>z:pg8|gy/|2 >-zHcYm 3M^ߖV[(ZzlXu?}ObF- Vjuta:z6/}/?;/ίg\(써}|vDqmz=[r6B<-^2'N8SC;.9.:p;!=zeԏ-вc'>h,Cr*wKwd$WHpwH_-Goh6Z4t7>9qq6*mѱ@Do>+`7/.~nϵk|1I|KdA?r`tQcbe@@jv9G] |:@^^mg헟.ΈJ<nQYѦ/%%ir>3'1$Ђ>Z"otY[10!t mfF!I4nЀthl$3‰(pX )h-H3fEtU`T!F)cQ^&#s9W91qHtA3Lw7+_&_6\K4,) &Y^&d jYrd6<ؠg#4ymȣQ%"OK0<QD-(-f L-!1S 2i57zgό/FEff9YJe٨| D.dy*6g7f.`ġ].RT۳JR /l :;{< aKg90RI38*:d,j0^ #>> +~%-rsm\\͟@oˍdx!,ۉD;Rﶮ{+]Mŗ= Or2ۘ{ ua坳>>2<v%ō~%{Yow?=|qΞiy\c&:mslRlDF4 ^.^ ݗWCнy5Hhtmf`( "$ f١)&UTVmfcu;"(f[tZ(EP!XdK(Tu&* !mv}yRT'A#A Fp>D} ijyOh_1R}8S7%T')$RX7g=P;НIA\ ]6.+A,d-2 tdX=<˚Źկ!u]'>Ŭzb=P ݧ-*hb#;Ɯ^g_ 5|) 5&o/ ze.zή{GގmtkMe+8Ԡ<HLM*AȪh0R(2K^ 3]B(wJ Q' " R4i3Eڿ!۔!dH$1}L y# ;K*']5ۏבޯ5멛zQmk팍Wn{ {mN.1j_}Ā %$5Ym*M庘͗:`lZ.<eShYѨ5::bAYBGDrٳ"dL愍^w[}l'Oܗyg(eʺs$ R4ɚj՗hhc:(P$]&lI.d ~t_iEJ)`}j2gّ*FV*e'f,)"uH۔g^O$ټ@56%^~I5Gͫ<o:WϿo~~~?t_eXb-g_ץȍZ~Yf?0 o! ;ѭwڈn[Z9ɭeO=M|5}߷esG'ꝍR/Fߝ8}u4uE6Y#S£898#,v9MGq+"EƞK5}5O/hfP!;qv/N_<[+o4 
,L]:+FEF6+hm{o-ԅbk^Ry|Z¼6yrȗXw{jTMſqMk?8n;Y?{_XPZ#}K]06/1uu{:ԗxsw+$%[ʋg{^6t`ȳ/K·C8'U]eB_Ƿ ѹFL.Q|Zqyz.+03_8߿.:u]UK=v;TK܂g9NJfjPを>Űp_ͬ&;I($2IڐO_}{ro,];E=PTQ:JN2\)LBYɂ6cT],P@ pɱ`I x4$Ik)hPLHgCRT'KQ^ٜز{`+`,H> yXW_Ce]5WhaW&g.] WIQ12F(FG, aRx:lG)e[%-m%l)JZӡ䳠,ZQJt&9UROHeV%%p.:, 2fD T&y =Bz-09YU:&s3YrZ=v E:\aK{ 5$p]K<݇j<izz\X> _E@]j~hG@0Y)fM.ZFJJaG*dV;jߵ눾}rVkHi#,JYʐU^|̡Z&t%jPl[0`Ql}mTB-2 AR̚0¡Ǯ7v&͌bh$}9*ym鴥U&(rMjoXƞM4Zs -;}EAփfO%RKNeUڧu0hng0Ԧs=`x;X:3$L t(hz!I6lXQ.|&H>#LN];Nl/WD4iQ&ZHBe1N W\ Y Oc=L w۪be뚉ʧgBl" h2eD ,y[◬)ِȅAT8PQB\% $S9*yT!Պǃ4IW<9쏱BdRc-2ڳڰ^cKUrWgsIZO&fZF/DHE/Q@*L^ f(H9?.lm."²մ_/V\ʷ zԲy%";q5&sґ@0>(#cU& ~qNƚx}_vPA1z65/x%M{I3SHnD8Y$$ &l!s7%)/66/bRR l hl1RjdP8YQ/qD1i Лui;h-LMT PAaEsZK [.N2% ޮVdKq9SLd#*XQX4H"SF2sw!9x(*dԒ(Y1ELd܇;G:g2WO; _Z;ͮk)LkMVv_`޽.!XK>v.P33YfG(.#/ȏ3~Y5Pl]2Z;kȿ;gF/?xqŋZiOgN& |g3o_s69VF|8>G<GO&碲[qy2z_KEWU\+(WUJ'zpų#bRpUŵ/pUupRӁ+ճzWX*pu;q >,\NZn%_p+jWH=*0Uľ {R•t\UwU։UpGWi{WU\RW,-pUzp3}q)6ӶFϖ7ڤMz$́[ F٥G_gaD  Y?C%h` qo`?Jk0]D`´q\ڈ*WUڻ Ki•% OpeiʽaWUZ{'Jicn {U`?%U\c?w?|Y3>-:rcq \մ}#W,rx}+w !7}5\J`l\NZ@%l{\} GpU+WU%=+oI@ >l7 0ݞ4x%$1Lm]%e}JɌ` ~ApUbઈk.]qWEJ\Arc.H`!.]qQ] \i_+I:v Aot*\! p1pUĕp)pU5gvER"\i֜cFqAa?%Y>(kս7uGWI@S+RyI;=G/So4L>a֞_aHAZ>H[`Q3WHJۼx=VD{. zŸ۬b]NjCo?oTKY']w&~<(i*oQwLJ 6xOX\4Yi t$``RZdNzFwoqiq^ZBj %6d<4%m`/Ɠ+~4G'yrEZuEʵT'v<9řbJeW$RHgN^:[+ͅ2bઈ/& 5̜;\)-•|-Lf0z_^;o5.n2-jI[ߕʊ?bчt^yx׶U[i`?ߍ Ѩ r\%er6 =TffWt\q}"Ev}N ޿+n^\C]oBBDbtJ;fDz"MI [P/j' ;GYl ßGj`Dw}]/_Go6ǐU4?}|H`/JH]BcZI][0r֪J rf>'Z~$Iy338iTD^~ukq_UUq  gGȷl%1#hkɲRn-RԖ¼[˫aCI\_+nL"ųAɔ:%}څ bȭ#)hQD qk2uPFkDH!*WȳWP!Y KH)ZyR9vCԂJU}R!=j@\wWI}Bbi~g|$jS؟rWih-7Y^>%xdK[Ty2):Ǣ]l'Zfʛ"/DZ.zYJ2rYnCfRJ枑C Z[[>l=Ny9E0*Adzڔ4!$kS>QNh. 
HԞ+x)eB4L6)iO9+*Y5q֮rdoaKm c,IԙK˂L61M8,)%Gk|g5M >_[ 8$$q4(NYMV1wT" A'u~x_Wm"@+ `ϼw|gwXTG͐!M#ټ MHY} pL~|"lۀpc&w[v: ZxK @m}(ڊxOÁ$ 2MXe~ /r, )/ \:k"BO==㫦"l|fa|QU#,7.E hN1 y!9S ٨=<NJ5a4r.QMC[:[\57wDMIhb[2h~uײm]:>' @~ `p?x[) JhӧoGVq6p֟}Z݊PzYvq v6O ѿpc5+%lt?شt4L{&eMdfqd=՗4}t>=aYr3W2O*fpM &ObE[>L`X"r̍\vF+ I!)'$(YDHJva/ftpN%A,GH.*#ude*O ,[r F*XGP[I4bHX)$NOr+ɣD8TS~JsqA?|Y3Z=v"AA 'k .X9$1O)dp؁֭x=6icQ9@lK#=4 7o-N6|]׏7 Ʌ|c[ܸP+It 9[gjWܸc̽3G}-Ymvth.әI;ft 3Iк=Ӧs@#Vzpߍ?.]5/F u J`&ݨ6 {3jWfO4#F^zMܢ }:W?7~> 76Ã?9u޸.,[u*. 7LѨ'+Z[=lkm}DOz%Z[6ʹAq2:xEǺ+\.pyKZ.$IAJW!yLc"$f ZJ [q:\}4k9w^f^?1swa[v{%ղ:ջsjM*0mfH[ s3ggRьQ9 &yC<`NJ'Nۅ-ع@T*H1FYXfQbR9e;6Zɪ\JJfnlJ#<@D\ gW%Z]jh_vagʦ'wy\kR)6C;p+FfY :iW;Gg5qaԗ~L&yt-82"B"\Ų' PtV;@\^jB' Q}Mʱ!Nv&P f4! &D:0p\ .]T.XoJ/>-[!X{9~eF_8_guSA: hܨ?M1Xj% /H CNzb|}4O-&ļ-_ijl]N|}.3d IL38' ?*/EV[%M%\|=plk+xՐ٠|(٪_t'UWeOLw++#GU0~>c'0\oLwE;I?O)퓝`-''v]D|>uF3a|`` # EtldV Dq|zf;j-H3i (+w.ܲșFT2V贊=+ eD)\H@])h)%![ơ!6OjTU3 iVbo*FV}trQN40ݬJ^Q Sh ;T&( 2xv!L> Vcvìy-^*%r0C|\`GbzV讂#r6c~cgmI 9Zr;Afx`I?7_g^ebG7Ȯ1T)b+t@HmT3E"w t,5b+Uj*pWe4l|/n?S} l%$*weo r]TBVM}57>{+tD(_gOs p̬rUg=(\۰9% jFs&jd-սV|! s ܤ, 6B,rY04\`D$4-JnjJuv=  )=թ2JFF pi, p57b8LYXnԸiv*.=/{m?cX-JѪ\ cƖFj! ]}Vv>{</>l%Y 煠Rơ'儀Z}~i`'R PǓ\Qar 2 :m;fߞCHXBBzQSRL " jYkLD"A6ABsk7r%^{ ȓs-ذזzOrX֕>h! AL(nԵqW;#^ H(y < xps-lg ?~rŒ՚pX)L1N#Ec@S*Sk%u /h[eeX𙪤7U]fͣ[DK`[1 T$ qk #̙žck)IGd\4*k@QAsglA%*$(1T!07Fg}|FݽV{|`r`Zh5&5Kf"D5 %^my1 ! ragDsM44ޥ~kYx,MzΨiz;M; KF aj$1w: 42 .H +nA2kmQu49I 85`BL1hY;fT&(6 (^#ύ<]{LkߗfϷ T׷'yAW7ʊϧ[zmŴ̇a~rRxrm'ְSb HJ/ >zLX#2KKYpqkNTxGFǤ"9%@Ҍ;$8"` (O xt/{*? 
e*~uo c \P lAMG"b*hE*(/"˿vkziFSbSVbaL[32:1LE qKj< }1N,#m9KA8_?Ae#Kg!Ǭ+e1d3 O?ٛwoϏ/~}o?^`.ξ?5 2R~lį3*YJ\Um\Hr 5_g?4GM5 M4i.k+}خZO-\MDjSz8uϿ.E?NҔdq2$qLg~JVUru՟XJ)U1W,KHΘl'ߕO gtis3"T2(Z)θP$fMfޱYe;6 g^Uby[h9]jL3UZi*KfvtP,쮜KKQUfuPϭi_Y_j| xл1w<߂%w#`s7( jctםPc MJ︊2 ~M?X<+GvO2 }k_@Ŕ'0w+?)|JѹRjJۤ[[Dn36ULzr8@yv/V92K`U#)Ee~h@mv8H )@qcQG1xPRA@UVk̭ETnU綑Lj^} y  ^ p<7+<48xQ#^u{Q^UhlMlXfW vms ~XX<쇵~^Wx]y]ap KWm=z&<ՑrDY*p%W PߑK,v)ch\69-mDwJRZyvłQ,hB4F+N#4VQGCJ9ိǠf8#ropoR0#4a9Z.SG]&3Y@3dx$s|@+Cys>1-^}!JB=`^~e/\ir2˞F`UL?~nmi JC&˜9sEI4)3moc{E(Eʘ[ɼ7!M,)A&IX1D7[X`B<UXFj;c@&ye4zl5ͭ#gPE>C8u88gmbÙ'K*Ro GS:zdq2R|5*4Y̅Eͦ) IMNaFQ;7 qfI$*Ű[)r 3jc<ΨrXS.a*33ݗxɆ⳨xf-SHHua.zkrTjORRdi/ؒ޵5dB{e~qflcw{;a;yh#`Y'ɢ@((A[$+'\J6RywQ"x@'Zst"d K4Q9t#ryf%ZC@H1\rF>jK`Gz#32`̙ rKl-al0S.Q琅 pѢԷ]BM=,페52qU̓?+P~x)2&8"qO͐Ǹy\*1H,Ƽ)̥C:FP& fh/#p;02D@FH;yv8K\vJ^8DĭDl%V-6;,,i.0FKc,Ey #pYW&op#7RBb%#1S, A9@*0I0&w$l-oL8)uV\d_H3EV.nhaR-CATD@(% EMq"Zx x6\c_y2C<<[M+3hgdԚg ~IK3YFpmB' aKHcx(̑Y# -fZnƤ5s>+qtFf{ؑjK~3GhHIT5VQ4!g43™7yns9==,u&Z;NUڴIr(S#3%kRu5Œf_ƫl]@JOYp&;*]N2=}4 d%nG5?]y*֠楒x8R}nlϪ{nU]Y]-KsyѦi6W e2ՄlC? 
w")W,4ʚXZrh>wYaںz( >8h!?Bf]FlP7zqzs=d%^b&H[v!AP2h!j <>չ>n@ݪ2yUScḴ6o-T+ɑ V[5D5r-PZQo!C۷9m\ DnqRQ"ln_G̵ְ| -8$m@hivLaǨRX92lr6ª(1US*6f| z6m|Z\,*PL3KUX !x»D ~;a*V_&| F3ռ*aŲ SX E)KXV4XT`tT-;;p.5p;g09(GU0-XtպJJ k=/UX z#<%EPiD2 +C K\+{0,,{Ѹ8{*@%AY1vkrx8  tLtKXg5@WawVU8&KΌf(N C{kz1j !98]%_>$iXݒi!E)v5%lLjAQ ~@9N :^@$K;i`G6hLҒo-b\D xdI*hΨ 1Bayh$j** @gUSp!fDp`k>OtGX;,ٻ6dW{;V$7l7B?-)}OI$EI#* lr8S]Su b,dJ 2x!WrVkzˬߋ+FaZzt$Lz2EBfxtcPc1sC0ҘWr|vjmq4, k03 g  0Z`8x3 =06:#S۶{KQށ!¬[<5HUfƸ@Al }NaA@F Dͮ R,uP)H.P]pwE@z3#.ī'Bqꉖb gL w@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 |@Rr=%&lHaN=t&гd)D"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b=_&<%&@{2L VɽgZ92,\@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 |@K7[KMZ\?+w*%u4:NAXK Od8'C\iq KρtuFp~clj _@.w/.|xp:P16w9Km'G(h17qHy\c|`x {K8NF\CZ߽`0lܓYU"!rVHȲPƕ뜼=g-0G3 lep?OOoטdxx>'ǽga|ht>C:Q1vʈ~[5zIk?_hobm?;]qwLB$X^f[%8ζ6NAtts:X峅HF}؝'eY$,y?/q` ܛOJJ7R%P"h=_:):ʭ}tYwǛopC|3q͗'l zIfy(hx>ܒn5&bgQ;_G)OciLY)BѦ߼mu*|~3<n/Hg|BX֙RFDYn of)>88/3r%ds)>YMpJ1h<\7lC/_;^>pyG&7鿧B_'|\ܧbY?\hXڥu9X ?k#^Vdq!`Օ+ $:CTBJ0}U4өJH~&6#dk(c,)梉Ds.*"VK&B̝qĹ3qn6P7`h+׃vrv+Ë7/6vMwGɛ_ًܸ|U?ٽOx,ub(>Ub农&Z!o<X޴֡кjB4!%ƿި1^~WGW?=룷?![/!$p?,V,/E}M3e-D~{pYW-Ѣ˷ .U]羹\ǽ_9}z} K鏦N"9?RHe+zڏl̦'(,Tk߶ 'mI{NyxK!K@0*?.}H;.9Hۡ'L!`3% GXVWdM(N';29?asvš3v^o<ʷ γ:ggz^s奱g )nnciȘ|L>uQZ0}tMO[fh4qL:I=瘰}(?cU^K1:C9 9qU/Du;-X5Nb}:s3X=vfyk599vd=c^+ۨUTXt#aWܜݣ$:C|W|â 'ӦP;[OI+g3P&p%Mte1^*bZB:E,v̟m;؎z^jyE< /OGq4SNqUxF-qPQD-\3iRU;\uXgL,/;]liߢtdi[ɢV=K fkUPzaҵ:qLKVwӏ5zLJΓ&W8&B-f dKf>VWM:Zi2<+ ^E9a DHHpL( gkl r_/jB8:q{֩3F:mx0-R`Y$ aK/JmJDVX˪$Q&!m1dmݭlu"w5>ױ͏VTZdr\d 8&6 Zg ryR f62DF^<3k˾XGgrW[c ) bFpI^.Z%@ Tem-M5]v}JR kL1RB&Fa*YHJR:+ox^lz 2*I3'hnc@QE(\0l,K(Nui%bb?tSAD|xh  ػ7ndWJ.6,vfMCaQ(%6߷j[Pc͸[V)v5ux]eG@rk%UDSO'a|u^+!&k,AXGg#$0hrT VI.^pfeWNAzM9MQEKIƫcG(n(qlMVR2c¥OUEn`W+xXtD敱!r},:VLXĬ&{JT")sz:fdN!6I Ęɜ'S 3S%#dVe ^2S^Hf,xq^z(7(7oہaہR9aZ= xQ6AД74oE;w@1v $6 4:sl@(zǂV@ (ϊcUwD AZea ;ːq"c"􎀦8W&Ύaq@ɶoOo+Ůaoay[<}֒|n:< '_GOn[Ilg-[VݻLmk_Qջ](ڼTi)v3e ]^3\8ԺͲi8-3q4eC-nyw2vBa6louvv~ϻ8fg{XtK]e&=r ճU] ⛻]U)8os 
ʄfӗ`Z]z3栬5{y@5p<,4VAMzk$U*5d#B ބTO_-=t5,FbhmN2xQ !t\Aɂc\+~4}Ճs^qSפKӵX~>+$|sZbV}&blϯa|ZR ?jS`L.DL#~g%cF딦v_s7Wn h:` %9wj xdIsZ;9bۙog@!U&4wyp ˼ ' GX*h1e_:R+[q9& .Vw˷^D)RI[P1څkCY[h)ήb~4?oH4:-Մч TlA:I4Vj$RBN_e2sUfA̩BvR[Ev: mDd2r&{HY`H\BPkf[¼)oSoY q_$\iZ/wٮb~&)xLKL|vxW f*OB6(rBœ@5s YϞ~PsBYqk1CH "l)(H|I儍A"0 9{yvs(R[f]h6{/]vV6[ӷ"x6f}AòMW~̏*FNA .K_^[&M GV$V.6Ax{D "Mz` xD.Q‹$5Px,E>9b 2L R5)p3oDQٲ3>Uu4yS[S3lْHDg`Vs3L{-:AbAYPim3& sJƂꁧLfCZ!QR&$Y:J-l<%%!HyNe qfbc O>ʼs%M^A׀gFr+lT:F3ذ4_/^D,c([$ugy<)k}=?X~^ktvWQ82MO"E-a. 5, BVK5@(pztO[\<*gG3A%H"x 4?*TF'S)Q9]c w7oO)YGdy:9_V锏:?_/G/f|5?5{}秿.Ͽc̗?LuٙIK^ք!5m>2~\ g7fu~>M8|?ˍ/93gj$405xG?]2Ig_ qR4hK+178.1 |3ѻe 0y3$Y^ԮLM~ooUVdma%k{镘Qyh8Y G'x񯓟pAM:;"-s#b  /Yƒ+SZO;T9-3''4xbѠ i_Հ F RRMBX໓Y; 7_ɻA,=÷kN~Hџn~9 q}V(7k^>hsʟ,BW4%L46JY׎/ߝ&H"Pp"u5-W PjR3jvSޮDz2͗.k|o.vs̅"D͐~綛=3.Go^xv߸ ֳD2Kgs2ps&;%'b)3q1{ٻ޶,Wcmh! ]$`1=tKOF=c%'AEJ2mK%Nb֩{.oZdD~$޴5- Gٻ}ƧqBL4hcs& h1D] c{k5/l~Ֆ-=ZkbK}۰_zr{B)~OWiqH5e؅Um:WZ[I95! )<̄1n^fj"dܲYm՞X6o~6a^.*[:rΨ(5 9 8CyQϔα&H))HXGOb LIxpZFvp-Gg yH>U '$CR~3n=_zx4co>ZUCX#GTj F`.,+ 21`<u~*ƨrRg-6**b)7{rIo :@΋K I"H(wxd!)2H"R<1s,PS劣YXЃJ4q8eD' j}#HXA:(M;0/fwqRO֍EѻeW2έ |גlwۮAsum9fUřxyeG-fWif6|p*h^v7{N3\Դ=4[L9s'J*;DWX!Jp ]%JzztEDt0vJpEg*UtPj++}m]%Rt\BW֋&Y{zMt%8jzfKv7rayt9:buejXHNXíi~M6T3Oss{ 10 <4}̞4WWn4= GeCQiYaNncP)H,fceqxZ :+V+yp9Tw;91aB.&lP^~e,:u_;4 ▘ raha:]c// 88tJ[;Еj߮nj Ɲ+[_*=֎WHWqt*VݡWˮ%ʶ-/E2F;:UBt Pښ^]tW=ڛP^ J ZPCq 0σ|˿ T7鷪e(\:lWYJ'4O_![vQP ~SM'oﰵG|)[AUȪ{ZD -OV'w~JfEEm8;0Ѱ@T-yc=Ovt&={^3 /m$tKSOgw_SCSp¸VQ0($< 2՞c:FY^1Tk%i`>_YH}ar8CX>U)Leym hm~:YM#_eэ&I0yg3(o0.8{ݫ!c>)UMƻ^Ob_-c4y#`A`>5PQ3'+qy4P{a olFOM@?3Qr({>\d%}SL.  
0OuH,dgbpv?TN',W ީ=/T>=V;}gO(10h""RTˍÌrhb*򫋙" (Q+s z,AUDҠ%q:aVNLoe_6=ﮦ=²])G7pPƄpJ ibH 4A4C n@Z{]izN!D!e!x0k5 D佖.&Z iب :_#P|o5J& U@ө[]+ *& 0,1a`2^5AD-dcbNa>xuԝ8Ֆkuٻ}qmTD 0cJG#NF꜏0~F;6:PCRF@.(' qfaHB9UaaR"y|IFSPaLor79p|n $ 㽈܉uG1˫2!z>ـO9 7QI{ݷ9;6_#5%1)5'PrV~À'WD`T2%(#/FKg:"eFd>Z :@q飶FW12#xS93Hifl ؜6f{r!Crᢾ WRfT,Z~8^mqS٨&Fb36!8&8"5-Ɵ 17C;4JVCilcHbxDTH'KxP.GH0c7qƶI̶vcѳvvRP8fȁ0HTx)*a"G•F5F.qVq@!gjG9AQN{@TiL|0}HFl>eD0#{FF{"ksyQk-Ey #hYWQF #$$m wb3[XA`Ir$0#6qFv-b/WbMYɾHEbϋ[#Z,b1bRJyǐ 0; $X1.p iϋbkMYǾ|C>UpYΑF;|U̽-kOE^Z#yO742g Ip}%D?shTiy0(h0#r+4&}Ls\Qzbϔ \CDJ(N 9y͸AptcۓbX~Y׵3iӫngZ1qaW&9B۫N^`.&om Cs(]?Ϋ۫9.tUf YV%s(=mAͷ|T 5JndYoRbȝṨfWk^` oIX›O^5 h@=|}mPdW,+W5QP^ԫ^YLxNY*1Lp/"u`)B'E#[ D ]?k*v)a\F-'N)$)0$!F577㛔3ilΕ3k3VϜN=AY7>Czܙ)Q/~{o ?ċΟxP]:"kSOf( ެ ܱ¶*C3mW|oS;GGUz~ oA:co1sd&EHEfRo^7'ĭ9(~f *j^Z&GP3a社m6.rZW=ȿ>)3*WVOOEdR'N}3)pi|u&N_޽xA-!RhJc( AG8I1 8(֤j#2=s-ΟUI"TbiǏo:r%>NKJΘxןt{tT'A,~e֜uK5FiiG+]kbߛm[_p.vs=m߹3U:QMl^su] 2_F4 䫷Yos25 X2+!|WYm(MlN؀DEeP̨dIR1p] C]R .jkmpZ5h(cq{c8IQAdii-M` !jc⬹+paMwCHu&|%*tVՌ^&tO-N*jI`>hQZ`W`:+)Rh6[Ozch9 Ay!46i7ĥ3 2h4]z1qn /*.)̀HDm!LtChKU< [ʲ eA5G&Ziv;B S!`xB`ʺ@8(Ck_#/~Jt+ufJ afÔt^wqT]3T+3m`sF[͈e3kbj2qP<8j|Ãu-_A&+?w7d`V!.k4f(N\yIܹDtҜ8NP/*(Yn1VYHg}_=Աg.TZZٙ׽Q?o`>PŰ:L6l)Dz@xE칠=*p6_.De.|o.tQWn/J KBQNF^jૂ-iL,RZZ-EaNFmަ@I/gRo$mgsz-R6"?ڄwCi ƒFBMG-|NLH p"$JO5z\N ^,\ͷt6A~X8 ^a'+QQ;sxEwdc?_ eٗn0nbk˪+oj2f0dU.&kJYFi,xO6TP3pY?"dc$,PHB[iZBAȘmq BF#kAC@ k< *+Џ1B7HFxS7GXX|.vGĊa9D7EF ;BFpQ")x  "NY ϮRx`lϷϧM\A`ZkS0]m7T єjԈP2kzNwS}8K\Yי^4=4Xgċ7^ JFm#U ?M4#L T voOFiV*1ڢ :j n)rNM-W/ZEjV6"<ʙ`%}  #s,XNM!&@ٜ z9Iά+G78{Pb)Ls8}7O߿xɳ7_<]PB?V9|/"G"ﻮ>8gзRӿ~Ӓ447oNӬɷu:&_yE/.kv+4 ;[Lgiw\ğ*#u-NNՇ淠i0:vE=bTۚPPI57u8ŊjvP'ayuxv`,4]ܙN@֭p}𾎟Wq^աMY) %cDW>SE^BU'+tjFR¦OW\;GC5J22א]جdbG/ouҼ$_"1=#qNUynmQRs\}/YtK>äQ}q}Wj*6${/,WsDd=] Tx!/3t0!z~CP82[Ea[b[{KQH5ػ (" GhTzObB*zKwvcҞhP ~b[|P@:-,$LZNhҶ_wS`bΦx~/9=K`UK1 <,^56|M>5nd1S[MHUțMCK{;If'J ;iH4NH +gR7Eb *1F=Y# ,fsjL0QE7#-8Qϸg="bNY&aؘ8G.M!p΁>>WG u!J}I>~eloӝaк(05,i=Uk %.)=׏RB>t$NB^BB4syĵ($s)nJ@ݨǶd7y Y τB2!.%6Lk{vT|(eyh i!Sf_Q!Ad U @g Y3?.l-l!nWVX-g;euǹPqJKQ 
1%Aζ7He/#n-X|`p1>90Y+m_UH;+NkzԤFbia]dYP5h4vmjS 릔]z:<,Du/3/Τ]wS3??o)sy(9ΟvoߺS<}э%zUI0[Q,OZ\CglH 鯫IS4ov儚WP=ؖHkH 9֚>9>[bzw? ZC[(ػUzW%Wz&8C +>Z=l8v/{1τf*L&lÑvs}^0E;oM:*$ ]An"]M vMhkMю6I9te |~x~0y(-S0Tɧ0<t2tS55Qk%/Qt߃xrC|\՜]<-jp2HH]BBI6)Ģ#u/sn8qH[%W܁N`7  h_"o~O_o{*?7W>?\~wv, %-ai[ *dWVMl٢C?Fp琱qtf9z/ D l&փg[?L(y1/k.g~↍_KçuV_~ɏ緧gwFtv1z-o~Sv]p%(VޏLwWWz{j_`aNErwīG^(lÁWXiU=rfFխ*tecׄ"U24gYqUD[^ *O/fzzj$\;ufMʘӛ~]m"Q_ª&j,,sV2Փ.Z ʦ?x =v} Z*x:zUj83g{ )IOx%g>f8[3GssM?!&_<m!Xy#ov06&> v0%Pr;2#] * rѕ+WtbtvJJ8]W5̺kܼEKJϣq]zpO+ ѕzx*Z>TW FW(] \QTu=*#]10@>\Mt%&ɠP2\g}rAe+EEWB)u] %颫%*\OmVlr9Y}1$mzJZO׫˛[;6l>c*NȜ9VA?6pk|P54u}5gg銷17uj1X]^' ^rf+n?M/${n+W׿}ɫkTMxIG3?}7;}M@Q e7n LPayq zNr&] nfô}JS9Kԕ ^[HW LѕEWB0u] V\rt9銁 ] UJh}^+CK۹:WWh0Q֮]zPe+v*] wi J(]-PW0#] 0lt%rѕZҹJ6jd] 7)`3ҕcFWѕPt@]k3r'N=#559L]0x<M~ZʛoɄ՞ua[3^XwV^\^ߜSnuަ[rҟG#OU7}$oj"P_QB՘&H;lZEW [jԨ&>ש5zh;ƒ MEuUBHDn; 9}C9mӰ"VUWgjĀT}ͳ>vw&ϱ?LNpQ2ZLne&E@“FWkL.ZJ(}IZ饰?yp'ǓAZc=4Xt@]yښt%!)%c=EL]WBuu8JW ,DWBKɧ -IS/GWa[*Qͽ> )54(J]z:HW ѕEWBK*u] ejkWEWϢ+$0f+W<ѕL]WBimu0w5lt%+u:u] eEW (J)] g덣ɯ]1%` .QWFڪw҆&Z)B;Vdȣ膿 4L㦓 Vdtb Q 6Ap=70mPz %nXd` :3/kQh0-O]WLie.Za[*m6b\EW(y]1eD]y=;g2(:ɠВI]WBi\u,>G>t`g2(L&Q$[EWgw;zT]F=ݑ4o-΢U<@-zʭq@wp#ig*.<2GWЩتlt%+=xZJMEW ԕv96M+6:] \t%.ɠP%ꊜwd+HWk}.Z2([{'?CDcYm/s`ɫLj،꫉yBHN.'#͸l ʮ͸tQnrFPXn̈]N O lf&Ǹ\frL]v}.3gC44%.'=u%+YrHW7ua:J(CuYV&JCFWk ,ŐTu/"攒..Jp}6)N2(P^`[*8;=qVI;S=& F zQ`CFBϐ+5>+x]="sҕ{FW`Jꊔ3#`6ܙuGWBY&ԕQF 'dYel͘1#M l4͸N\4-R״PnZ4 Mu$WOsW+T%]OjD|O&iڐ)>(e\=+#1q@.bZR+Z]-GWH|>b\C.]1ɯ3Sj鱞2zǸ`m9L]-PWt@] UJh42Jx1OUL0j5vf]u4ц6tXtGJ]=փqڹtu6\crѕЦ+4++~KrZJ]WL X%J[ 0ؚlt%.JhCbJJtD]Zu~i/߼F\/_ۛ˫?_w?5nPnGw'˾럏VnG'\ǖW+_}6M|?wwwK`wD=^ޞwv:7>heY}r!GѦ3]b%9ܚ]D}CM|uЊnGCk ~.6_GN!{q.__E]ٍ4zQ꽗6߀17ʻ~7 5_xZ[槷agM6=%"l _}~,?K0XRj K&JP2\bf:@+Qϟ;ʂ7nvq`'N8qPJAq :)Ic6aMh=6e%lX``m0,G]FW}.bZJ(S \t,r銁-Q6Z:+?@cJoUuQX|)}p(ZqdٿBi/ vv}}:g,ĻS$Eibe˝XX"$OU:ܮ:PBWOўW%KՀ'sj(# ]=E .S=M9Ɠ)c/_bu{/XB^j{8 g~ݣ1݋}5~zS6umD|s|Awvs`9Ex]ӎa(4|o]2ۙo7́LcYo`ޙǏxd=~ݻwpǃQzr}$վ5#Bfvz|ǝxDž##PQżܢps~@8 E y?|cf{5z#/&;n ]AnSm?ؔGd ~ug9d8P7du5YّMgr8]HR~|{=@2x a+Wo_*m__~sj:IFngUɒ>IYbmQ:sgfMNuhj^ 
u*D*Uf>\Md:{U]*3s!4j~Xhhϝ\}" TĘcl2}֖[-hsbtE1he%RkPTA*Fr-6hFkUSmv>7](FO zsI5K[[j6|Jk幚n kKJ66ͩ)cĜDh=t,&34"r1 Eǚ B%ލ>=|z:d]=b6J\GhvO~o@B4P&sPaf4ޅK@cVYJ{'mv( Q!(}tB q:]жT?7YRѦXBK{qڃcFgdɺ3|}asyshZLM%>Vs!+F[uAu@r=꜌%xuq鴰$mGZtj)hK si '10 46G$Xr!xQU=%#Bb !j36zm\"FUy,v͋JtAQ]|=u50sS"0f;Dr΂U.E{V+ RAvT4JkCvYwiJ2U(_ "́ _ \,ܢ ^ TTlPtA[w Z 4@sxC9j#XQ_4 ('έZ<ĪuѕA[]I5!1JP\ZlhI0ճk.kej0O.FZ`HJƬC6ƭ0*T)JvdZ[E(L)s` `; n+)jWH JjV\M `6+(\ed QAq)t iWPNJXvl&!̂ - %juefH57]\?,1u6 DaK@ mu!n2֜A΂ŜE' sG !.A 7)ؗGC%::P%@ y `bBQ AP{s e*3[(@Hq`Q :%@uPyP~\8e:)P!JrD RGU!ˈc*Jrk=/^,J茸pQ4I"iyM,CC6fd8=hY{'&A-H#38hGԇ*YȩՏoTC}^iygUcۆj¨ZaBTDV3, MGd4YICxnQڀJVڎx/GT,XH *@/^j% CjGjh\1Gy>.S,|n|sюFe츭t%0f54zRTZ[`:PIlf1hmTXkM!jKBq4RVO ]j31mrNh+:̈=XupPaRDluH5\Qrc U8(=-:)JjB `%(HȦČ,mAz|a( =ZT}fQ ƮXBďݟbDtvͨknc0n=I bX0j) cFYTPcD$΍-Q;L ;A. lU?<6*RU Qc@s@wD37+WjcҢZ4AĦUJmX3i^'64T lFB.dP?)'{֑F_T赛8|›ւ ֫LUY[Tڠ@%ab@KwfgG؛X}s vM)oAn.Yz=vtվIo\5L]lϷo2>v8[zOz^^x"D 7o_ku=>iq?EV?[>ۺ ބ4mW?9ڸkvHxOeT U3Xo[띎;> 1_[m > -!'Ч N 'Ѓ@^H@Z'4@_7qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N'2)9wAbN d@c>z'P(N";'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N q ;O,8q-=~'P2): 4H@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 t@ħhO rIqQ  @8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@O z{{ٷchn{}Z_͛g@.q Y1. h=zdNKOׁOkp<;.o {,t5j4A ҕvS؝pɨ6NWeztEsEv'DWp<u'tѝc+j8w3~؃8€nu oBygY ٭[x~}X0`){fմWoVWЊ\cpﱓ6"|rϝro^ |kڅ:]Լy7.7cY\ЮW js C~v2xjUy~q\y>{t5|cMv-?| t+ܤI㥠^A8痗xtX7,Rgxv7(Ob}rH?'b wXfnyjڏlfm??'d!#!D>`܎k{>qQq6ǩM-|(9/\>9r3FI)BL%lQ jN`wG?JnT*6>e=FI%+9+F ] t2ZϓPʉHW]mo#9r+ dfŷ$\p͗}cad'ɞ zlK$q/f^mRJBMt],YN*XTiFН?3nNU"4O]}9WUv3k^FWEJ=\A*(:WE`-;WE\W$l?\)pr)8\;Į3EZcW$%2՛+׶? 
ǟJsI_`zٵLZZwtevo.n.^\oȘ;͔Ur+tFV*rXNs9*mZļD}><(/볦ya=y*((QNArLY;, e^I̠T%1y0@`}/xVGĹ^Ŭ?~׫Fb|یz.qH߯{ _؟߽bmg^[zFtW۴.?^jlv}Ś> šOu%"O0J3Gvwg @}O4khaH:!pX I#RelZ8cUNoyB7"un޽ag1#p.3?n=!W<'{~;ʏzl^<#ꙕeLϾ fg7zh]7*_N-\e~ؼǶR kڤ]+͙XWhpǝ;G< 0XfYDHomTDlT҃;o{ja(\(XOwOj0KrԬ?룎ߏnÍeZa|35+O g_.lFe )FAzo7'~7;<wxkc0U%nsx_ŵܬ׷{/ii ۾{,vdsOZa6OZ'#gWSc4B]#ټ H,ρh{h8ivsLغF) Et^#yQ}lgQ>4գ 3AX-4{HfY@LGaց Tom͈l7DVӶ_c*r񙆍,o *q1r &ڥ8A8/dɼ ٨=<3k[1 9:la$ʹ /`"*REEK7][}.}^ o4e\5&%eR15lRY4^lPH *O#aF1Ws]eNxg.>$ kOHQ$hi!֣g94!dD^:Q)z+Fyj`ْS,0rW: MCrL&!Psz[I$ġaژ8O0@d8_E=tqj.X9$1O)dpVgsm擺!%ZZbBV0EYHۈ1M -%-aB}"׍L}9\ScҽgN ѻ~#0q+tsFaRʖi8#Hg8Uy2.Ε +ň5? o"\ٶU"&OQ#xv&+=M|OFl]|lQ|l(>yKhJ5"2=ˌr2CL*,XT@T71kѥ KI 3ހM)zcp[ g/YU RFc$[0֓|ݝnEhlY3/ Tfi˙UY@!l.BBOF(u"(/ךcS챻Ӥ^j"/:M>WGpgiczrSSql h6z=lodV1wYlI7=wTɸ/0Y.D>&q3!d-[ s<9ֹFߜvx'70D~yqM qSNK0e^I̠T%ұr`>@-hL5o-m_{s~^&,~=xS3L@g|=l;8JF$,G'ih (##h4 uɦ_b0@ Z%0 #ϒbwl Og?Cil,ѥNŞ|`tJ 9BI6C |T<5m*g tR=Gb6ˬIA5ϛ,,Es_jAƷjͷ*D_]8(dr?_7{w,ωMۆJ5 j<|ze=% Y;!w6%ԩkW\UZ?р\YtE/ySsOT*ق;[JVTrQw|9tJևxy+y.*QwO@րޑs_7qjyʼDYwg?75IX4(Q0s,]:bv}s)ӷ$a|r;gZ{۸_Yt/'4I0Hi%]rEd˖l^JAȻr gh%t.0B(•@d p: ZGdB-#J:1M]w tL(03K-hsw\"jO8KNPXȹ]2qC *C@UnQ^FK|F\5O5>kmwW(UEJ=O.&`"HV^# q&2Ǝy[aF{%UO;ܼMeHmz:SwxE]EF{o~xWϴt8)`(Zj@K'U%3;JWx%zUcʪwm;jd& <4s@ j}oI x1(fĥNt[$)}No*@}kJVZ#r% %7҅/^Nqe8-=>p%{Ű~T{R}A+uGՄq2҂Ce2W&=:JÕ+K(:U{FNԹ}- riO+&SJ&b$܋HX2ʡĢ؊\лޠi.%LˌNk}1vȁ;@ಮl畏Nż'"lD.Olg7_zH úO4Di;oOo`Ubn8de2m.ROnOdKRB<3p#F{WYPHAc"zy┢@bY!ZL2hT80:1~u1~L53Sx/c]c{i_!T!~ySxjs:fð?/s~c7BT#L\JbܠIc$ow/>T{Ư!xH^\ï'(ŵ7O|w4(B;JYWaЏY2qk晃1a q_3׍d!{1:#-}z)Iazz~*j&^i}c4867_G :zr9,BPD)H:zI9W~U?'XzVE܊PQ}(?.h'ʕOBP:-28cRpI5 {gՆ87ܪ޷SZs2#ndotY£ PJEapY ldz?nZw-lR4(֙p+h_7Vʛa Ky2!~?-ZqTcI#rxB̔-@W(/Y.(F¶/FTL8pv.tT0kϦLdiӨK֯nL-b\ j2IM!E U-ʗ,m`;mwiL"[X j.noW}w}Tͷq$04W(J0܋pH+p0Д)A /:m`s& A<8BhXg!D)oKg dQ-3Hk\A.c]b#GqB@ d Qy&A'$6-kjkm%DKO{@+R&&R:ZjރW^ᅼ?>(}tL{8i^ܶ.ԃZ ]+H/KշW[kQ?y5[|2·{/5h^GMmKݣ3KRs{˓7zwCa.ޗR.JD9S Z}Xd@3Z2Ü%oI͎ ^n3JWYNj mEn;h6_9n  u4q`)DHFj7~wH CY(S(^b0 Uǣ8,f.QCEmJwϚ#k(Ӿ.:Zh Ah˦Wv 28h* YO%%zOmum' Ph@3pUj-2h1KE](E$tBɭKZŴ  d4!;ȸ-@ZPbP 
CZ#OJFP(Ў1B7HFxSȷXJn.0-+OtsttlMuNȌ7gމ0SD,5hFc< zwjP\xvNbZ "̶_,yBג a 7T єj2srNwU~ؤ@ޝmgmW9M 8" CAsMrᧉ&`VJ`-{YiO48-*Re7[ d9D""SeEn&ekA0^S۾p(tFt9,hNM!&@ڜ z=I}dn o"x y?>_o^>e_O=/(3{?~_ #r[z,rE<ulx~׽߶>lko57bYo 66_yCk~+mr",9ϙ>=1.?/{U~89 X|Nq}ڟyZ,R->zkbE5/^/0^|V|25Nk;5(޺Nq \8:@u} Oج>>vAtMB;F~H|U-:!^\W#6^N7fdWȅg9T-[fZOtۥuZPj_'ͻ5SfYf7|bXhاr7/{'e8j&mP.{_XܫF}N2){c1Hj%MONVhS<B!qdc\zZbj-IO!k`*{(0|z7W_j7+-2umJ!O?u DJl!Y ׆1/=l@ Ǝd>Ռ[AB-QJ7,FZK5!z.%@'`vZ9Z.P7 ̯NU}&=fnٮ/cr!qr'R.O\V!W*ԢzBV.BrYWWgЯU!מ"jUR^Ba B9b\kNE]U^L[5 v4SHat)`n~ÚTq2_GLb'FFF|hI~Z#;#|k *ww>oPlʛq?Qֳh.rx6oA枀-hr(J8{eǟpch9 -$rZNBIh9 -'aqqqB'D0=BRrP5(TZl'沥P5PBRZ UKj)T-PBRjj&[ol&[ԫl[o7M~d7M~d7M~d7M~d7M~d7M Qsg-mRH**֩H9{K+X r܂J@Bz6oI] Gb #>FOF襳H i/ms V}e L ͽ Vh"BrL&!D<@eWAҼ× W'JPp0@œr>sBMZ zo_C&)Ñm0:%ef%y]6DI; !2hdzlB}2&P|R{u&2>b>Efٹly.y++ўʃ Sƴр 0Q?^1.r2FE`c=dl@26MX"o[^uF^ο hk jrjQ3U݁EYw?1 g׹7Ph'b3Cdɸ+%j%NWr\EO>yfHF[v*!6)x قLWS H,u soAe61pO9Q1B1`FE+=hɝ7W9{T9}Dw gQfY60Z͇+니>96tBѾ ѐS_fͬlt{~qtsoX vx?u##=Ɠt9qeǼ!>Zhtnzɽ{:F-|m~t9Uh6ƻX`?vf$yPZ]w<-uZךsknzmH념:ҺZѥey|j9f'RLzs;d%:Rd8:>/U*4zPpXOL=Zw@364hVy ^+ny2Ց>3u#hK#BD̖{.pfV: }EPLǛ5ɉ9G{K}> M0/c(XҸIJJ93^&!X1V2My3;3"M5f893*rf$a& ٠dJs>yM ܺ|NA# Z*y렌ֈrtʐVH'-csZ%ztJ @ofd'2c ݷ!g@\i3m$-*AfII|YQYΪ ΡbQPslKcIZ-\Zdn™>F "#ʜkD|V/b  @Phqa$)~ra:Ef}6YܠVENjv3= @Bu*ہ#{vHΐHR#ټ mH-ϡ$](G12tx)C#}i< ;`Bg j \褚miiJ =UcCTe6>]s`8Hv)b Dkt2i əDpF!5'+0օ՗HG59T:le鰯 z3+kpwrhHCҮ=((ˀlB&kTRrT^c."!82CrCL*,XTwdbF+YUSr))7ހM)zBY>G> 3jGr#JaBe[=ws=b=W@)FlJ.y.J"I~GU%6bO],gVyP$ْ%br<@HIH4RreZ3jלEfٴ/5B4MY ~1NɁƩlC ^^O]Z,v[;;kHv'v2[m~*AU;x,"kQy"?2Q&7V#їtEtt~×^rǗ(ffJ.9:#; ^ze[}P#Fet'^ =2q! C:/^;Sx 9 lpy%12R,JJbw^;hթkwcHnPb=A__0MogdMi'/fcy8_0cxWvnΦ0h4 uɦ_b0@ Z%0 esy|籙6d= Z?kYt_cGo,xj6#>@WA/y~ cs*r>z)dbg8 =Λ.8_RoQ\? 
w^uϗݚ+߷ \ ɜ[s`Ovۖӌ ۺ[@4Լ#r'i IBM])O 87_FwѻOrCWcm\]' ~ O^Ɏ-NԿ>O|05 Vr辵6?vns;py}Sg^5&5Ӳ"Ѡ|89Yjt@9aI`00ym,dj05c|o:S:~}.)Қ Ia*,$mĘHsYD–"n9km76y㾁eތI9޾}wl!\]J'k[oߠ AgJ{8_!eE M x b'j1Ejy(zx:(QH1lNWUTuV2z*5ÌκMJm%b#5΋XB gBв-}z\$%-_E8ϽZ+gg9j#@ \f]颵Fxz֧`m8r\RY8%IRj4B{K"0%h/*hlPBߕ tI7szpI?\.6t|d"ބGfښM10X`˫»2o#;GoĻ'/!kJ>r9K+ RsΠ3%oGHM+;3BGaZ XTJіRE2&rwjCi5`Z¡bjU1Ȑ%+)4 ,Yb2*7"FIf}VdR: x,)S]"FK540jy)M閾)kteǫ1 x>ԣ)LJA ?o-y$-wG,`SOHq UGSSPHQF: *'MAD+59 |cphi2f91{iJˢ"{.!2T,DtA$3z` *"θ*Io9F6!$ġe=]-|$)r";CbS#۹_?|EPT)x*\`jg ?[S B;sgN\(CPQ&Ж1iFP"J0FIJץ湋!K^*a2ވ';`&1Bfa9̍bHh/|νs<̉ռpܖ 0ΞY9{}Htx8 )%U΄HE,1I Кqc} dcbQN] ޠ$'| N#H pVM$SĚ9s8 6 e]{jWS\W]W#iJ[ΟPBvOV\!R+P̶׊+Tz֊ kYo>lCf_GM$ DҍILHRF**#S>Lj^_o!ny?{`'6A%FT'a ¥xf*W9zLJ)nbN:wfzS"b,ߋW:< ]m;Ы zi\ϵ]vz{ѧ88Og6T:A핋$(U>P, )('$`ϬV"܋D(ky[ G 'I7bseR"YP$RZwikPn{؛a yvf"yXX_w[}>y1=>Sϸkp2L~wq㧾tt3 c辰Lw+/^|ŕIzpPgF[c:8(:|qhл"o=em3u~Ywu|*c-dP^Jdv[&b<1u{IПb÷8G]2. rnEr.J, YƒdnC[O1hFGj2:i:ϟw>[t12*+cՂ4p R95`w;/z.oiwxT<ܥ9V6{ޝ0Oc= uz~!tHzvB]TDJIcrLxuN%9Ly+v5 _9s kY$p(SJu~U7~H3B8 g.u3:Wr:=D8_+u+r;?oŐh:r6[*3p_l/]M^t_H&>߭u?9o5Ԅoy?qui%$͟޹ Ӵ W͵ו;=6_γ_O B{&*kh{]~،yo,;xLܸ~E -y+nv̬ MCZcm= g;Wo<'nݝ4|.VgJFyNf,zZ2XޛH|yNF 5-v̐E f4GRS6 !BL+&3S9r%6"4!BWV`'m)tڸeը& ;Uonz!)Ju 6pʑxB +=DMAj=DrTy d4 2eEa^:ǁUZt&~DR};:D:W1XBXEӕ$ĕQG㹘}m=7s~I3^7³ԻtͫΙnmU}Z0oHBt٤͸cU)sHw^I(K'+|% ѣON{5 b LVY$]27_j V3qvr ʐRR)d C%}ș{C 6N,[ x ޒ9\3s5&ḙ*1Ze L2+#k(l 1v:Q1Ɯ$Z 4pZ [|<հQ1r6c_J-);]AB^hKOG %Ŝ^^bE\rW/ њi9[yJZ 'bP9%҆$O<S`$*ox F(3dVLuP-\|m _ŒYHxeN:ak/%Q;˹%@FV͚[AM^Pp$ط3fM4ŏ'ςEeq0cɟr29- HEodpQTvkziF)Xް:Ոs$Jz.`4LA47>{ⶒjYF$.E&d[}Had %(H @r D[^~p%ʬHu~cg;>MkzG{v˟^o_sw t!Ѩƫ ,:/Ppʋ äȸb uy6yr_;Byu}![d2t;JGP㈜r4 7DıL9Zn7ҖĽjK|ZCd)1PČ@?:иt9G﹎I#6Ֆ~U=ж.q~Gr@A^b V'xL@ӝFNwU5qKWYnu]xOu8^ZGq>gʚXv_{j{A//IU*I^ɩTmZ"V8e29bf|$6q(ɧMڠǞ|ڤ|4.mJ>OW4{Qް^4w%?gw=R LAYTWP(40ɔǝLl]9d0d=e?A:(-y, b),FgcMҨ#떺R5rBʐeb GdokgnM]~jUzGoohb}x~ ` { ]WtQ 1EkN5QT aٵ4q(Y!X}v6'=k $|LBՎӰ Msf_HO źDKEY%$L&=MŽ&rʡ yh/N))Iȋ:}ԭW y^c`SU:yhpP?}9L=Xᮽ~@356hւy;׬=ݷ0|v[C  (抆cmh% lT>r:^oI>r :[NoZ_'f(̑(S`bq72KbT`6.ha+1$B/ևSqN.x6$fEtV@yT;ƾ)\'\r};iLB>ڌ 
b'%Bm~zǂzFчM`PGFqqMZao2q soa{?zoMCm-R^,4–frM#'4جCYhLM#:/wbc80fnD,(&bwXQSFe4ԬB(jUZnQJIxWMTM,%hKjArPb*% @s{7qv%j4>݆p}^>nÏŅU7TA[ B[56$T˜诬Qϡ%"jIپ* hgO$Ú (I0ؚܓܻ@8;"|=K][+4/OCndφsCBo˳Mgos099"P\~8.oVK{lї,V ! :A}5>(/<$:{j%``d*ƮPsDmsڈ),ȅ\hdB EДIkmCnX-2&[xP[x/Qۯ oxskz{7,\ i6ۘ"6Pe ص>7lf_y \BLkWm Dƚf6<`0*N 2W.b#BlŦˋb|1Y ~{cɒR6łTUZLUHMI~3ž:St! 26 ؛b"{m>jQruYR1yI}ku}K3k}kug'8YĭQ,L93ɒv^[XPNUDE-Lp&m9e29 Q+7O1|60/^[" RI<q;[nnNrq vH E=ZP ʻ0) ہ5K_\GBRquFo!kbA/=XeMdy[mzHҫ sa y3ug?JJ+Xpi]~K"} 9gAE\8[I1_%By(3jv1<-7A6:K>kgYp@O>vQ\fMIobgAޗn\ZݛOݏ?UKQL*5Z_ř|qmms{v9zߎGo?΋WNz쌶 A\Ew?~eg~tp}y|wצ?\.k~4'<+'^vZT&CE2Y~i.u֟v5 wUǩp8i;R-ڨlH[H6ſfuiW3J 4dr.S)6؁M)OƔlXU>Ym*jt\EڴBΙ]n紱%JV9jC|plkj{! eV;8RX1Ma PmA7C7/&xR!K1A#AL&񩸘%=~[%G?R ԡd#Sy7p3-Ls#&9Og qT:FmΌ*S-6%ХX1c"A[= #cG?]]}[ Utz)}%҅w`F*EJ jt*R,8ńJ&֒t81%I]M5x-  _{ܶ9S,v+6YYjmi'\#h_Zl |b3FC5ނdL#Q&zx]ioג+ }Jf`gL7ø1MjHɎcOUsҔ8۹UյCW~>ZR ,ɳbiate&8B3ZX jL9Ȭzzs)u)lcx׸B/|\\y y]\~便&LÐiTx-(Vm?[F N#wJL;{F8#\]' IG+̅B0"F[?ĐNc:!;*1P-2W!FS&FF,F989S()OR8NPZ&lPG3 <]Cn |x |Yc3}9`="3[ӞѻzlO*qJ"`$$jI$a-j#dO<xr:ԳzzژGKs@4)X dQs9u4r+"" Ê 3>̃ Cgt8>2sꕡ 9H,`ЋfBh"֝yhU4JۯVa_kMb͚ThCp@aK.7ْ_tpb4۫f@R4%G} Z$5ϜR $)Ud,#: =oOimvC/ѧtu> x`˓* FI2t?ȓ_/lOO*O}.~y C׉?Ӌ^c!x;7Pߝ_\c JNէ*8_LƗ0~,\\QuN@NR|n~nA3uK8]3=:u;J ߮u?s5e<|t~=mDr2$M)s.ڜs2fXȞ4,iХr'VHg`g4]^]tSkscNQkmw?o< -ʅ窂ío)Je}מW?Rp5VփR,wt݉`fm뚻jA F|9ׇY2n!U.";v²>"1Sͩ5; n,&*fT${X0tvIutIk,\kӪe(4 i @# '763贛TLLZ 裶4GKbEH[aTM;=kyasi|^Mºӛh)pkjIZ`"; tU&D Hp'W6`s& A<8BhJ !Jy)\:tٴ]m58.fQK12M"ih, 1ru KUo34Ϙgz#EubP;A_{Zw9)PB 0`QK3K VKQcBDgCqdۤN7KiYK/Zvhȡ!D%j@uR_c/b"(b"fTkmCnFp}ߖ4_gEg=!t84_QS,@15iUݼ:9ngch~F@ߚg=|59X\7)-S]i "ju6:+HVڋT$R -5?HucP{~7| TR%2Hר=ՊW2>Ԋ]^ν[vlBE:V8{,B-#J:)͚l8'Nw޶蛺pBZz__ʦ_q^%r.Qyv)+&E)&/8ۻ޷x[֮˼5͇5X}p?ay49RCa.L(A1u [*paΚcϜ fɓ>+ v8\ ݷBT_;3u[,,pڲ;A snנyvYcëfϼ^/X luќ{.&|?V-jT \{]ayX}_MPQ^Ts9?N>>rq/\t$Mk7OZ Gu Fkpg5Wxa{#>FPFF`F%Ix-*OUrB#h+,$$ zK"<%ru\Jx)H֥H,X]]HZcL{ 24m=58_nP]6 Ww>L:$l7޿ hj=Ђ45,g.hjVkjR^S;BMK#(/0•&]+DiXOWGHWBedAt5•Bݧ+D2g{:D5+1KiO庱EQ|g-%v6` ׊ȁW S'd< _EM=&f @S%Ԃ{vSeu.χMCI3ݫE?siKjOa,MW}5;_^ a/1~o;-+2ep 
MirWtQ]!`U]!\]o*5M|iOWCWM/]`IQ./s$+DUOW_ ] ޞz|WG}\cUoRt,k݂lOW="GWXb *V ]!Dz]m螮B K‹+Ke)th9:]!Jz:BTj +l1tpu1 t( c+(ԢBBWVQuB$OI*bL9UZȟ-M\١;SXn~޴/ϻXXf WtP6Iu7K'J5g'jS)90h<!\Q ]!ZMNWH%`]tpM1NSV{! JJ{:B2n +lY1tpi9UWQ=]!]gzȑ_yyX"YS`,d)?Bdˑu9c{}}/ >$N& Βfuv(~'=0NWWoZw4PmtE'zГӆ  Vj=RDWGHW6܆ Ud3t5zhY;]e4DWGHW َpnhm\;] {DWCWԒlXdp\6cj,NtutL?tsH_ UȄ 3MH:k/@>:YQfsRtkσ{fv0Pw Ğ+Iw;2]3OHWA`,.f p}31V?+zZQtthjut5PʉbOW|g•<hkDW?]'}aA]l<@ҮlAWDWz^(l;glzx7@{%Pm ]Xf pٸ@ft5PJ<ҕVޒnH[+gR(DWHWK!yp ]-J':JRֹd7ȸK&|E4ֲ:%3Mܥycf0ki ʏl͊77"vCG =|tt&Z2rrlg€63O>;2&"݆ ngp̢@ָP':BRr6ni_!>l@dt5Pz>UhuKt|4@~38P9CWġZr,rESp෿B@;hd+{YϠ+>s6DWƈ ] n`NW]!]:6DWՀe+t5ĕѵ@IDWCWD%c ]=QjիRY͖jn3t5F ]mnt5P(ʙH{kWw@ 3kq/( Nyrj͌vh'-fO Vq>x!3+3u@I7nb(nvfeC=ƵPq㡫 D6DWě7CrJ=1ҕwV_l ] hkDWHWQ8] w;ϼ~o]8t%Ozya,X,fU99(ņM ]=w'vCt5ǰ\y(;ҕ-I̓k7@쉮uCt@7nF]-9v(v ] KT] ;#v3t5ຸhNW@I|: wCf U`Gk`m =iMmy}٫?߫Wb|r?!~loF~~swՈݷ;H('U3u 2kzi./JM}jDyDj̛5w.on,{qp].a+p?ݽһT.n?L>6_zǿ泔~/<{>j^#Sn5gWǑ}.>~O_؋׎~\Q*D 2 |F|5?f`~A}xs[>}4?utkWya{o^'[]KbN{#\j3ųɍ2{*u|Ēǂ0|ۿ|d;ᱣ>~u~?hmmW=Q1&R5JӞpK09&Ƕe/gGi<&m ׳5hsBԘ )fAmκrTqa%TscG:,̗;yAA:RME@1b ]L!V ڜb'ϒ*~I"ԪѬ[%ε`0"4hFJ9E}OE*rz[HXݹRe3D&H5@&Jʵ 5Ԍ1"'ZpO=!zd0vk=!1+>o8{PtBREW"MN4j@@sʕA&bd+d)玡dS0*ǩCf[P5CBq_uqDy&iIn]FB8FK{@1#3X2,7P$}|:oNBUi5zpsDm:G5{\5T4o^G;KQunE"NXJ҄GYG( }m-2FXKgCtK5>JJH5%@_ "$5_u1MI'ċ¥!)XӍo&hY=9X sSb0f9xDN5.E{ ubQ =56Tnw7!kR@-M(VL$X @G^-TTlPtԠ-!x0(crC/fĥ&.ʼncL"SC:"/} 4`7|Lkwm|P}-zd~pw zPZ$ >:%AuPi0A\J_#똕LQr+׮AhW!ˈc*,z;%Έ ՜ (dUAYX4[i 3*]+c, LC l$RbV[.Qx $nGe6G *YȩՏoM?/j[D"ibxa&k)_֊ H b"ҧa5tS,u4\\H{*#Ⱦ1 ]{ #Ԣ%DErtm xC7q[}c$#l]D0ePExXKV !nK ) v撌ڱ",8`̠&(CiBZ\ 5CN!4E5K(#{˞;ByN$2&e#kcWؘc0 "t% L1c4$C VA {;[ZX80L$qt3%VB8ſ?qN3u64M2Qά!;XP*xVMOmp$` p_nLJH03 ڪ ]yl8Oy~-?_t8o MǴ7WoLc$Pe0 溳<=.-CIǦgBLeVnMŎgI#^y"4z&x31r @yyӣ"'+ÁwCRC^;$bCMw`E]("6J1N,$\FNPPj URO(cDdM|0#Hᦫ^:H%-йgrd[`ikژg.ldZ'kѪU3|Qm<3iXX=s 4ݳ u~9([vLU@Rc땀;3z!@Nk (B\4j5*0hYZ =+υiqf0Mz|dNEEqdGCXe(6tqHZg…0qlp/\\K5H0IJ\P Yt6H284ubEWKB4&y׆O]/ #B`浪`zcT~XǷZ|uu{p6R!hTI/sFDH#0oGX{:sdV?OF\$q([B~޿v Zou[w}뛋_X }oywkn7)]0.w/ܯ~w ~e ?;v֐z/—ty9WWmkylS 
ئ݋mMm⋭x4i{Ѽ8{⓴I,QԡHxG|-Z釼j8Z5uh)z2QnZm2}D7MSO(iRN<̪x( mVvFEP~3fv\ANڬ[*zJt[G"g#FGqxEQN+EpW zz]~ pa'C +B +B +B +B +B +B +B +B +B +B +B +_p *5\\J@@ύ"wE >1ia*+y| YղA%qy"C(jS` aŝj+ܕ/fk m"F'FDwYNM!F&ldF&ldF&ldF&ldF&ldF&ldF&ldF&ldF&ldF&ldF&ldF&ldɆNɆ0`!al^ _EB +B +B +B +B +B +B +B +B +B +B +B IdO^OҊTb C-|~Bo\j@w`m?ćɴm8|k`0h T<Vn3|ٳSI\%%\u暡UVCder W+0XZP +RK\ !`z0rUt(ršUҠ\=F2J1I$WB FFEZN̡U(Wߍ\ɯzyDm'{Wn4w\J\ݶh۟\b2h_rz(r5 dD\=BbZ\%5\%sXk8tJVR(W\o1s?J2s5!Cd-;x*YQ\XȰd.t(r*YP\IJ2 rmU Toyr>ymv=߀:`Qª39pkOY.zxcSVFH. az}ʭ ]޼]AIܳN|Y޴N:8#h {y蠐ٓ5t6C?^r"gUɽɞxCj 4kOn6ݥ︔+iּS&C.TP=\.lYVGA%A9q{ /ߐ .3+}W;YkJV2|UhB> VK:sP*Y[L_?J҅zHrz8r[ULRG(W{=fH \6JUVCdrx AO}g`A8(JV2rȕʪWGhzN3Яw3W<vV=P0Z\J\ݶ!B#RP5sC>xJV'iKl/JR3Jrqrʕil\%s)\%k99tJV rJR&6CZU2)F\$;be m]KC d&Cd.`%#eQʴ_d-ʞLꦭep!0wvD!Qa"z~*ɪe 恊 GL^@+u( QͯwfWh z$xc[zG|8cԶt_OBknvۼ D"͕uDgUn"uN/po3㕗]rwW7|YO|ن-.]bq%2vr}]db_k  OWrO6j6s;?ڗEڢ^DjOS7?'%q:I5mfD]$6 |8{Y_soYףcD(6Zcڰ@1ްYB#Z"KLp0SUF'zSl=ܾkc6Je|>^ g׊fuiݠْus\ҝ!˖iL'0دGuO ,l:hfgK۞Y\R hvd\7oޭ/kS>+rjVf q8ynܲ涴;rHONfſc!ҠtAiDQp4۴EWYgEVIV J,-P70^0<ׅ6"0 ]p8QFj g "r:DOcG5g:'gJ$ERR]R)wE+(f{eaD^syiߦEWIG'nd~MWz&Lv\3Yh'٨i<>=/~A/S\.vyQ8 $Oo΢] jdiџ_Ϟ=>P%Q6e.dr%NjRI[GY,rݖS]y?ѧ7ק4ź>8*PP駬wc^h~N^>|ޢ,JtR1ZcF~mp6mh~_q!}ڱxAz{,t+bI\W,~WvbÐ3;7f:.ς?e|۪nB}0nz7, 6ha-Wj/E-Wǭ:3FSپ鬆g-|qm )5?n]5u= ,-M0ǧnjyw"Zޤ>M>Tms6` Oل>Z\{E}˹CI&Cp經=[(<,/NΣȍKjhtTGS(NdFks{olMV <Y@ilrvDUqa[-Rmjz5}դZC;B$RYOQP*uGh) "<K*"Хs ʖ(z{]<ɛΧwTγjfWΎ_>CΦ(v=USloAsq_.Ѯ:UW!s>dz> 2^ :"𛎌{~FI2jfn0VbR$fy(X|: .Ehs;]I'mq}`&]6%E+ӆqZr9}g֝zj*V ݕ6V3<Q< ǔ҆}ȴ8lV+LoF|+[cJݮMmoR+eTSeNPg"$B!EiLRPN@4^fo~>P,Qa:cK .~{8hDYZVx'YF~:A{k o緳|@޽p[-wnm1zKbcg}]o;x49!!y*c5xˌ?x떶" )i5d0DI!7PE@L$Dݖ"U\"mzmmO3@a?IsU?R5O wv?{ٿzd/ѽ4nנ7h]0Ou<.d=V`BL[LsQ+@7!R= r13Hu<^z0JQ׷AJSPhqR@sZh΍p12h= [hΠ{b%>(I8O ''c Cl5E[f)Ր Ώq>k^ݎyvn9V>{6<"JlK]EG nQHcySuwwFygmt׭2{n!dEK;6s. |;7Hcf9tRgqx\hښ};@T&kU5RZHZQ',1$jE5Ϊ@F0Ej^A R裑T&C(uє)Vhk]D(P c1S16+%BExB`\.[mBHk֦|+|FxÃi\@ЅFNp>}Pf}f U{u0zӑ*Mg-"u~`dM<٩|J\O glOA|FJydzaʻ\^t9YzD.U=qx;гcft|:gĮ (uD|n.V_ۯ拳|lQgfvO˦ٙo'_lsԝϜN'ǽ4O_GৈP<Ғr5T-|mƙG9~n+$Q@EԲ'RƄTHkt#\CGzJ'{驦. 
>cNO9n6[i= .!YrZ! xNH"9J[ @d;LM mN!$Z %RMIb*Y5x0q8+IYy8=xE\1FYZ@ֿ4g_s @H(jdoH6˧<~DN(p-Y @oNJd@hs9F 6JhuI !`Q3QšgCth+^Sv:S>dl@QL%&S"E%G8,ÂLtArѣuv+曟|og}mϱ|Ǣfkdݎ[!cRzJ䴭~o+>SzCKTFεzS2G(&V YwL%$\^`0;tt?rn-NDL\ht!9*tDֶF0" HkeUKn$6{2 / J*G:!N/ l)eOY{m`_zUͿ9,fԨGoi~?_}_}=ӿ={{4Ȋ+zGn¬l{uI-De3oݗqk:Z Գ}͐/7ȾoN;[_7??u焔]ʯf$P Cjq4a}_ͼW,*hB)[v4;U:K%՜/Ϻ+?=y;;<4]zs&%o= U )ѫ;gjߧjY49B(a_7v|uwBjoBnϔ]3Pkf9woiwwy-3zaæC8dfKoo7}C u/X&>3U?ON{SI3Gؗ?=/3L4/I7j,ޅdլ?,>F ~er؉XF p6CONU|?ֵ[<-GWFJײԝFK}yY A׬x(2)2סQZ21Z|DۖDp=N#Ȫ6yr%CxD6;lծ͉/ϵxMzEX5^/e䬖~K!sL~DNC6 a80وj94I I:-Td{-PWwc_MhbKUQ!Ř߼Fq>LXk;xH+\"}hʦKKƱ1z|j"ĺ+06FEDEְZSpc)="S&(QCFj<ט1H]ޘRݱ@%D!R^5`ZLg[6~* 7(bs+wu*1gkG!fI!4A@Vi8J*M-=1 b"deȖ" &JēE~RFR>|-fo`z5&+IRfg%\J$ ͆=@A[Ґ=a% aրP+q']jQ$)婺{H% H6B8f$VvR?65 V5& 6'Ƥ<V9 T\ (RGL ^{1 X/HLYiOnԶ2",[miE"*_X`7)5)u[&FĂcMfD<$R&V '9U4$1z,X>mv 6CՎ8vo1 zz\kNa]54v@ b*6.:{7g|M+Tb'ucd)mr-)$eݢ&#ƒ($+ "0 FJT( PRYSua-DH \ᡏڳhPD!+3'* aCo48y)~LgBҵ%= H@/椨-dW&Nq㷽ş!Ȩ)ۣ)trE Y)Җ] %eJA}.Fોh.q=z.X2d},CÍ||  A_R*QbQ@wRW#SvJE-jh ዔ^Q6DJ+cIȘ QǶHǦԓ|o)u>[ ӁTZH֏^̽>Ps^/ltX}vANN۫37μ<8}?`g4t:9.ߠylӆ~ǹn1h?G {ㅨGl 9uz~bCH эZ x}dLic 5F(5Zi̍KSβ΍6ZɆ'2*b!ib@޵67ncٿ/;WM#魚[IχIzS -K$;5}mI&e",&ۢ\^_ҰPZ*d^(G=FHmIwi7SD%6hɋ|amP/ VT\Ë71; ##Bꑷ;ٽܝLc*ɮΥ>kS벳t*;'fW%|nq9sBJ%]M7iYlH*..:`ږ$ǬxpVF e)\<B%hw$Jc(WHyv7Z|n'Ղ2역!#=UIOЮu @k[A+z[&)f7Pq|3 ̄ew$[ubb{e \ |>_͇n#cNnv[T}|ț3-ք䎶ϺJw;^rܞQ[zJDJKJ+߇q/Ie|]^u5q EL.;P9`o1ȫ#%R^xp*eBW+yr:z蝦&#Hߤn=NuaE2ǿ.LǻcSpet0P)J &=erít&A U(w&taۙ$qs?A$#bM.̉\4G,%|>ǵ/SHY*_FB^H_Ϫ FQh(s YX< X ټ㧆q7MJADϥNj Q!6aN%P%Ŕ e.~jqo`x{k+TEBOi#vezmuq`މ%Ro[_RZq' ú>9>q9TcE27xccvٴ1EY%) ! eŽe0=NXr#k@RQɒbf0PV.)MY,&838 RvdNϮ˛t^W[Rw( \'t6i-T^uϧP <Z*r?Ds-hP`zl/ mIeeQVƄS=8DR fd0e)wJJ 38 _uU:,@.' 
\Am[ܢ2QSaa |zs:n<ލGgl"pL,J/ +0m^bmF4*Zȕ, 4U267 / &W+Q(4APPL(*}Цc ]L'b&|W1`Cjr 8ăR\TE‚ [Ϣഀo:ӭ-FR2(C&i<mr7|5Q޿qO4Oh!R9xbLg :\8#s==PvH ya+^2՗0tEJ/F Τ "iA3&T ,z̓)J=Cc0C܀gj (2R;-,YR()1B*QlvK*X0WfKB_R}O(+a<Uf4:r)-qqz"^Z(t;S~xlC^O<"۱ ~1SLcUދ{'/"QM jU^*;Ӷ;TRI DLu'Xfsk9Q+[$l cFQUw_)ܔ/`T(P1PN)wxr {9ҕBR@"ZHDiw^oLá7DjO 0t8U;D{WPj/2- tuSE$a V2B&o Q21S\֜%CWWT ^NWozbB%DW d=z"z3+ܴ$!VwpI&^]!J>8HWr]+;^BhFI ^0K;ixBt Unp5X߽11s0Rs)PB/f\h'b 'h^r nB mKs΋\8rB3b+gBY,'5q\7 cUUB iY]d}dw1zmВKY.8 RdL8A- `QeF:G*\xbB]bmSVh^Qs8yHz;!Sek7;xr)- 6*zWT ZNWrB8K f+lT2tp+C{O( .{PO`t2tpE2 ;RWR+B+^\1ڙ_e0~ 0GU|O->U_Uo}SE5ΔJ\QVZVX[j+Nj _~l|7Ypo>sқ8AZ @530^w y3GxL #H8;f8ϠףNqv?W({Ej^v77)oVV XUUhmڍU(ncrw3ʝ`Ϲ}<=7l`=&x4["FɃAZRF3jAk=R)̓kՂ;,܍O&u>vv ^g8UkZ2Sm2X6?Loa/ʹX~Ww`qٴQvwOw+vF^dUKLi8U64Nw~ŏiCi[5OZQC[ݣf׊MզZi]wh8*.⍛-ON/]/0[2+n"TqzAC a۫ Aw:֛-&b˽Z閻ҒS k^?Ňm|f?[)AoTUw{79 SVV+Ք^֙)?#;ww}{{\TPۘ{[teV%aLug[ʻGtʞUnG{~Br̙qff4EG3p 3`<ܣ͌Mg᝺}f$ ĿIdKYsV) \eÔd sv!,/a=IZ[qދMk5/G3+b`6r^2ʼVeNFpbp3P$6$T's<~Qg2~RqNEJIµ"ҾdC9ҕ'u:SKi W$sVѪgbAzs+Í)66B֦BWVdt3++wut2OJ+Ư(\Zi1rHVEȭU| GfmЍ$h^Wv$Ц WkImȀBmD$ B I YhHJP U SruNЊߠ솫X WzCWӕCi9ҕO΂>]!`xWfޡ>]9Яy#]%ڧ%zВRerhE·+B :BBWwP>|;tK_R TۓU%z{I5K':ɩʮ@WCUό#B乫jp7t6]]Z%3+n ]q#)5ЕC+:]9Ztut%*zCW.gЕC}r(s!kOWCWҀU#RrWS*ur(uOWHWnEm/apAJ+.KpT[o$ ;`[P. 
nemϭCU(7}ŒÕiv!LdxV`+b%I@@F&0" 0Rrc.ݏJU QM7| QjHѕb{[T4TCs+jʀRzCWv¡5;]ۦU+t`y \MNW%#ϑ"W0\;sZ&/lͻ1 tχle߈32Oa"%DH :NTs:MFd6fP$X*gGL맫H\p$Q :t=No$r5(#6뿖hT3~>XMbTjT yeho1"XkL8V6c/`7_ڌO4L~\ds bI t-?`OMc2/;>/g(0ˢ%\ʧ'f>L?Er}ݲXQg2MؓbEi 'K ~Nfg6YF6}ǯipD?_?0`#_PskA黜dWl| {p@෌d?^e?qvvcdՋ?2ڨsp8`ZIǝ&@AR6iR%bRZS#_4#oĀ2pmk4-=čP0 !ďfaoI4lgOLeP9ׅ{9٫hN.h4`;)9mqI9iihҜ2G.z,ؒ$P*oI'[R"zvΤrWʖƝ&Ř԰$+tVlN"vhWC:v٤ۉ>;T8h`ơ?=7C$6U~oy37l8/U$7`zZ%ia@ m@\SĞpTx7))S 2a(ݍL1+kˬ~°jꬴ=MXI TFu;Ɖgb}9E6Z|^C2xvۑH;R=\jsqvX?A/v~^2kDSv 0!V2 rŝԨZ6lK j+iS *HyeR& 5V1VǤQVܛ/I#=&F} FBG+%xCW ]!ZuBkzjN/启?byĻA0\x+_uЪܖ|-JI7_W0Hȃ&Jm.!A`DWn0@j6`TRĤ(V sIJ8԰$I#QR4g*othO}JoTOW J s\ ]9Y9R\gIW =˗7R_Q~ N,<˷ܚ>e#0gx.fġ4P i|h`#Gt?\+|+D:]9\tutJSi vX}+Wx;9;9ҕe(]!`rpZe."w%U//)$jלZ~w̻JtuU-xDWTBW-7]+R؞ΐ8N ]9RBW<]9z:C7\=*]!`M \|+Vt>t( Jr%5rp7C}ʡ43+Ŵ5M﫬xu a mhoVT|2Jւ]ڜG@RT\wzPOKZ*U\vrO|%rzC0F*M$ɔȈZВ+F&ZZZnvF66EaZM8%IX.Ug7K;h2? ʯRxKjR 7HV›H$ZT_;HN ^erpZNWΒ45DW؀?tpzCW+DMOWgHWj%i{ǽ݌ /sqh8/sm.0-6c2F6Y͟߿th0G!H87qY043W   6p`̒tDL8 ]D7N3]~L"dt6~ƿ"Ţ 8I$8fkrm{d\.&kzV<1lbg0-/r .zQ%> Y6©u)Ǔ%9)Ku#ߒIdv~esϋ5dj}\~gk=N4ΘG 9_AO {7d,n&'TI&6D TRҟҌP7tZ|@̵}>*,W&Ը |n8_3R`=/}$AaD1 rSMNgF 1^ kv%ׂ ƴۋt@BQRV/UV,-kU`k2̠7Bn1x;OKa` gy+3~#w[<ķz},6hlh=r݉ϔƦ:Sr3}Q1Y2!?p_YRӺz oyCK::a'ʿ?f\:7q5=k p$Bδ/: ((p+7< Ԗx~mh`<%0E_UJC?u%Uqy,U)oaemeZCJFNyC@uѬ XoDC$8s}X<ʦrٴhGpȅM\^!=֫>?I)~ #U}2#Iy-Cҩ5,k?:@Y0:Gb/I-K_?q/2%L9w%ݎQy- G%J9 "C[}ɵiqTDx>񾒩J2d~Q7kzM`us܂n0(_8,hXK~}ml@5wDKPLΥN:BլUvYEә53uJAJqkUit{tUij8=V2eO$ c)ѡMG{oc+HRC\DX[ 4:戮b MB*-1"E0$Jjt̒i7ߤĻ^oj`=w]6/B:LiJ~R&yJÐ69hNe޵_-Ģ$? 
D䆏{Xъ1 !Fi+-)xZO*-*=.jFӯӲMn:)TE 82JADRtHÐD|g&(]杳2/k{'-6GIcJME#RMwy\~'ID"i)xZes 9Ŗy~Ş.-?̴Wi42Q01Eʢ7ldbc2C֍z/x?łw4T ȑSݷ2iY$%K75 .Ch♢*Gi:b]N?yـ>%/|PH($TkHDb6;"pW s-')G:J%3ye$@AW<ߊ\2E:aD$)A HdD҈kM𸎜ʼdZ95eXtnHU5 ]OZ]u{YcPK\ojSi"^0o:;YS% CL w]^M?Vx쓏$(x&NnWc<)ZÉPfEh Acv^?ht1ykDmHچgnޭ}ekR3RL\κT @:)(3q9SK]qOcW.s1s׀+"U]y α49_",si 2 TWd,JQB\q7kEZkdU1YVx#+ _*9.ۨ,Z7rz`MEvf92{{sSXԹ|Jj q<*x [rmwwmU+!6[ LU1 x]ڝdk Æb ڄabG=jMl; wkkK#9; AHG1 SJX"X5 )&ێMb+`XغvYE1(o&%kgٍ.j((jdM72suVs] 4Ro HKΐ4Qޢ l{0 zk'ፂi{lRsX0C|-vrD b5 т3ip.d0x'¡Kec,Qw `W+zkK,/vk[}0U"/`{Q=ȒJ#f0ᦂEr6d'5`\`>EHZ͘6 ~:j\EŎl&2#qOZ]̟P6mD*% xBϩG}+EԹ+d%qV"5'+qRJs2(2lj #L5'|x0 A3%Q>+eo_3N>?"d}2c Tܹ_" *dY^ʼnk4pҡ_YӵvOgL=;:(iesK\"`Nn$%$(U@ZPyTuhӻ7kp3d{{d>C{ f=.aqѦ >80GPdphU"VY^Bi7ϼ-oC%pudyG&Z*~mHNVX Nu 17kO;PRի\$d{H|g"J3,#?dCj4rRHWdph)dpTpHZ8=Ƒ?|8-^LdmzbFAvO Q(ke`PIPs [g&̤ǘC@Z6Hcy0^2tEX">ъΌ*{nKIu 5t`Z1ʫTQ+]J47d8mв8!^m؊?7xBeJ~>.=q=fUyokɕ'yO ?`F|.n]7\-s|prP' T }mA63Fqotf+bIt>ˌn.?6$A )%au%^n,z4(T]lucAmK31k=bww-Scs{D)=ͶIZ8)=F!k(yۡEPNA\@)? SW-`[eBIkq%#ĨeUt4\2oy*nnj0`m8Qb׬qKg*{G q-s}|wu 89Upfp6^+ާ6 ^Y gßF'U\.qF˘ɲGT^T;xxCRr7 Ti|Ӄi=^oY;|\VԷp- (N2Ǚ)ZiYD۹omxZiW~8z:>e#%!V/ٗBFU< Bжqmi18[CL'h1 ^W!h}gzMzǂ ^S}AXVpFWig/$Pℂdw._yO)S{9sev8x@};R8zXKčE%NA Tဒ*1S>߱ˋfljCՁF|Zݧ(8Q21;>[9z>̂YgdĀJ?T[g; i/\&Gg*q'gj!qgg'}G QƐ<5h3 :fAwN($ls4uBLі92Up 1#.FxhA7) U-jmLS`*b8Y}XR=e(I|ɤmL?Jcʒ>K{  ?gLA #x߿~瑩&|Xk #`?7gѼkJe??c4$,̈lvkM2Nlo8J w'_1/Fe0O~ TCd?L-q.ᢻ}eV?.v ~e>~6_F×li>KKEU#9QW-48wgP5%̨tWqKjwG]1оq>[U>0 =(%tr>{Xc0M.注ua!KPëٖ2F6;l  =J5]!G4L(w0*0{?*&tS_uZЈ c3ULꃿ2. 
HiEh],7 6i@\*G-c"b(`;{$ UFI{ngljeǺB{o>i4[ y *^\=u;\xԇ}1ɻVՙRe<˅JUَw9&՜lb^혔c䱯I>e '=|ъ,=̺\_D+]I=iu@g[-Pc>Xڡ:F"`.QhfT"VeNNa {^dHhNYg t #m{tT'EIQԗiGdO;S:_;83xRV2s~\}x; 63plYpl b= Xu50.v50-D /mWYZ,E0ZBfrh1En$9+}*_,TpNl`ʹ5u@33*^'{KZ`߄R"΋5oZ1LuW4Twᠹ8Y^5jaR&Qxh)s@Mf26R U4Aut= w5 CCxZ'>催tG-7)XZ rpV e.A?v˘!z4=xC F-zn‘DhRH٥&;H\}w;Edq`SZ@ջ ,JԦ,\̮D04yLx7Yswݽ7O}Sf{n_6"Z{A/&_D<\o/=|~) =/oSɼ>QHW?@V16ٵϊ&+`#{5y?=o9=''dځh-3Qd)+W-p[F^Ngww9_0$1D&+D 6|έf(;w[z OsZD" byY4: pPM!!FةmimTNr+VT,/JHDY$-Fn`s#ʻ?Zǟ#AVZKFElֻS1I ϽRSG1ZRJJS4 R`\9Ш(~7.&u15hDFSHo]a[d7h y) NͰӠhwI7/s$0Ik Ev$֦X"@9iG2#LZ5VX36K'? VkNO*ɦ+sSpÅ BFUl(]p .2",2hRPFW˜ԆXVE]z%KeE(,5ԎtS5~57zƔ2z=8V󩱆5njS~i<\"˼-P:"!)^ kxVXfmq"U-Or jNI*ǜȬS*vp%hҘUkZ2U-NԂn^ j"OE19p؎.*$ {M.i,2=|?jXP}'uUr ִHJ8l_=BPi*C'5 !n_8D= a FH5#ICqtPGiJYvd.9귎c2R{t;,tM%'zTx0Xa^"nw[_cnXI8y2 _z#␉-_ uxn (ãs73//2_j!*~zDepnQhqsy"1w|ae4Kq(upԾpA?y`{lty0o◎J{NrX_zIYnIԇpJ |61N^}52P0ؾGU+>Uon͑Lre3!-,wLn;w%rm4"A/qi/ ^qR;'\?ޅ:(I!c=Y١eiQ$˓I^xvLl7Ae9fښNɠ2ʈӷ!ZZRV[UV }LhG% Gg%QJDՇ ;~B I,0%ICh euhX7`[]$MpwtuGIv;c0l [AiR{ \x)n@>tY =V3Q'!R0eL(2+ Wcd/d JUT\CP`•1)UI+W*/S$v$ZdЊWϋ'Nq߰+9@/wQmhRBCĭa\ՙ%Fg<(^VWq %Ӫ^xiWV:4(t( oKR8>wUZSFXmma57(a{[mac޾[&44x] 5`+gLOVC^Aih\RY ǣ\FjzѼk:R2A^6MGn@VG>nFE hԋ8VZmږɶ"gʚ_aegb$܇"a8&ĴyW*$)QMR_T7$l%Q2? Hd~p|9 cK!ǢCp gkEöI8z~X?'1rBvozj"FJQ?80_ *(w`Ҧ! @,z-~tcb.jJT9>_p߹pC.;Ç7.p*.úadw;N?:U&Wt\ڏ7!\@kQ0$isɶv: 57[ݟ*i ǰ6ė@r)BjV_i~]VHM酹_`}_O0SfPije&jb߉s&]vGqb<,7K&ûr@UbYfW7 f>\iW[=C%6~ zJ\렸X\)71.C3.e3KbM])k{1woQ?ތbRưwS1̤F 9)hNI bP؆f@%o]1bDJ2.wV,/Fk7=ݗ/{Ż5Ջ] լB3NgYKS9 %#Hntyd2H`r5Riڟ<|NzKb\Y׶!,sᙄ`chc ]2<؇ʛ>N[ $a _Dkquaf4PwD?M8 /C J65> A0@,<ܔy*/wƍ XfϏ9j LQÛLLF1EwIf3v$LN(hdbcʙIVءjCG:&@-L`[C;_G^eg߅ԓyPOn-|0ѝ!dP*>A8#FFzFlC'=謌I1BK__H6o0Pb[ѡ, =;M PTWbTqZhaE'pL]s ]Z&k,!  
ORTBĉN2Ɍ=Lr/R_~8W^c;L3?N V%C8vc`0-4SR,Eb`fܩJ( vwXQ鴘, O8Pm$;@$:=8}& }+<3[=#+8w} G$ Hz[ `j٬@r(X Dj X(.գG{:,ܡydȠ/p0f#8_}I:Eq&>yIKP1 )AE&(@wL"To]|N0]90tV\&0)s=o ݮev\}UOvbV*=~?K\ԣ2s?b̋7dA .de>,0lA鿕EmQ[-5Äs%(PM%vfT*P*R Aca2jB%hd'WԅlOyշ״lj<{Eav){&eɽtڷb_7|9)k+O8%@|Z>%wj`w^8iѺ}*rv?MM,!Ay CqKx;QQj}scesj2~i ^pRku)=`d Aln6d{ZO>vՊ؀'Db$cD+ISlcNoۧM/0a et Ncl6qy( Go[o vclV?^X|w/Xi }-@sYBZ,d?f[g[!@e(E8MPBnDhBTdg {uv x m&%AJX,z,˿׳,S3f=\ˋk_Q[/ޢ۟|Y//tq`׈yiBiF39"KA@ (0M3r ,X%6=̦!&ʴvBu*8 x7c h K!F7рC e8c@`[OǸn4h:)\A3]',oHI=1& $3}ڡsH0 8 Lx`p}AɂzBHԝM9C<"TpLaeh9C)ƌkh40 p^`RvbXABV =o;P07A c$d<0BH I󐕷sr_PW.t?ѫ\FGZAe +DkQ1_` 2`!_s{Z!$>޹;h$boHgv`FKOxہ*&sb-$)SmN@e:]烮{\t`mǃAJr|p9$(zJPE>mmKZʰ>Z ٥%N t;jv*m,:[)E)!D){)B`6 /QS_G{E!ȱ]+-TJ48zMFlrSq];ϊC_L,v`[D0ֈ_䱁 HH@z-ss+N 6Iڃm4VO (i;yV>10h~X)%]) i_XtԄB6=n5u "q_i*>@5[1S`[z CPhAVtkB 5DVoH@( )SϏ($:@`H*y侕I|N<c3j也h5Oj cYSIR|bAփ!:7N† )q$3Pa+iW$Jӧ7E&K nu@%EmsBʪq}LAbt$֨{EPC@5J}gĊ-sbg,z%h}K } CλP/0CqD+{ՂJȶҢh2~W ,?4|4Lf?mugcJCHCEUc-H޸Ċ۷)wtҕSʏ (-sM+vE//k~Eu~ἚCR,d%&UPF߀_> .GGH/y*+wAУN2 S{<ٟISʌU4y"$Gw c%P :T(|a Y$ Th ǎP,FCA1~|@b1~/GYX6BU þQ[9m"ƞȤPp0}e#\?RDKS-21HyL44H6?ݧКKhMG D; D0#%# Zʿ'ZzXp1}*932@p1jQx`8b##v |NA}bDO&{6d@H}0Oqx#rq 5&Iszn!i(V驮{ƻ?6$lTݔ EzkAؘ ʝGbz')Z1`މd_@%.|vD"1l?,n/Pxc vth)Q xN9:Q{ 3e/\`ɱ05A,-ʩXTnJbO_>b,E )&/Z?L"!*5Kzqu1.n,5` 9m f d&B5k.77ED9QKL s0#%2b" O5sC5Hr K\sQ %Db+Jb~}]hGVi'ίd1q5nDžej9t\Oh\D"-U ;W! F S & 0ɸChp5ssC }hM*`fy' ܔhٶ/je@ѸY0`!,u(3ƫΨu/_\sa+UTVen[;RIi|9 R g"i刣 CHJA=WRkE PZq"V깫IJI!8=M,.ZelZ wsBAYSACx< ֫%Bs98>XSw^İ8w .n2Z3]S~:E3]/ǽ-QZwX49`9N:A%+К'\9# 4 :ҿ@/P "D?>2ؒ{]qLwԾWkm}9M&6]}<ؙݠSk/nq!sDnb5b5-{JSZ̸HLS6pgJX}< Y<G({}oa"[dŽ>hec q/G,P? ̞﯎A2|21sQU,T@0x'5ٽU\|n(Ac%FoaJYp˶$HWw3p˹MlŬ =vcX9JÉ:܆f\ H5bN2!=U;d'峑^q! f܏BK~Zn*&hF&m-DamiĚŝ<57a8c&х(|ȎǦBd# ׮?*IRmޏb4"; |nlҮuwN @_4v-7c3`C,/ 6蚟qJ\3rNA˫]TjT88LnbևoiMfۦ((p1Y:<)Zu{3l8Alp7'/N.~8=}uo;WqبtK=>.⓯>|'/~?|e|X^w0\bh~E. VwM2wڞ{&۴i6v\^WW-^nS33jfI]65O&݆y\4ײ /@:^0A $3A42ӀQ*MB/q`qVo> \^7wn}R R]""-PKy}v@H*OSo"S7]U^vzM4އF{ P)YqW/}5!N BA-5\?&G${T>>'0֟'lΆoNot)&N__ g ?F*Fa"WO?<-αz6PUSTEޛ? 
/jۇ\)W@k$O5KXJٕ:I:FXmO$e4EiCwz);o@k{#ᐓ Zl{N0oӌZm4cV xlI3h&t!bXaX$X9qUYFWL \RlGA"Ҍ`ɞ,,;2ͮxN!2kN0m;.Y4& ͝mA=lɄhHBwu/qq&Wwt[^ۇhs#Lvc%0z%,CZ%' 'y, qC9"jg>hs1Rx S$iG6y &{a#8q,hM&QeFݝMЁa / = 7N{_(Yen.Ga5q?&K'uѣ̵/V˧=|g,좼fNʝ?e7^=eɋF!(շ&S};aP}BUŜN oT߆ԺAz 0'S%ٜ|H&&9|{tLOZ!C1?)5pwLV f gjt7‡$d+)qs\6RRk1KہWK_b{ .kaL0wwnY+:W k>,i;O'l~̠:铋geVIR ]2G6^H듳E3la,ɺfş "<"mL4|XO'?`mͽm+v͙m;ayz -Xu[؇<_Y.\i5on維a-w_lC߳￰dS&b#&Qzl^~s 8f*P W P띞^͘v3ao;G5Y;m;4) \iOGiBK;{ɻ7(s4UIVҳ?zu,LKF}~Iy],CRZQe={y9h> ]suEwKh'> з]29>oUt KA.<-#`*IJt!R+RyCpNO\"Hϐ* S{ʩG/#_FR>??????[2|ݰs ⮽v`YYT5 2BAư{ ^9] "'46c$a!Pt QrIyZ<7yWSC~ߝ qILAscyNrg<1N;קּj)L[ Bܔqu|R̷Y$i!פu` ChXCa%*-F`j%sǯ'd"`AJZ8g 쨎 Rhwol|[?Ĭ p>?ܹFA TP=\uZ}&v_6e:d$]IŚ4n$$ycbW\y*4ܐre<JI0Y&+25 JIoܲd޵2 A\`{~D<ҌY<jTZEDxsMUMq,6<k8HbbJx5I̞ 9ou򵯬y 7}=$&cļwys!=F!':YvIƌƭX5X N(lFas 8̂e6\ #U`*HHRzX^ Aϸη5」^rBYĕA;ȥ{%c΁ٷD$,3loLf (_X[LakEM<&ٞv,e XX4 @%[XR6gRa`-YxwJŅBb qTѸ ):4g1Y:WD;M%RBE1)X%A{% JBvʶITG/JܭUwMZ?vO}, LF{tck?0EAPr,(AQ n$lǣ2eH ɱXqj0"" ZOJ0j)r%G$3gZ. 0O;=UpX7qq|E5 Wf" *Mf`؄2bpbn@V)軈:2h𾈔Z/+Š H i İ qӎ f(n:9y9 |E+`j /k Pu\T]f\Zaul 59pιd+ *ףige"e_ɓ|Ι9(n(YD.Y>H)*VÏeQߐ%rŰMP2oX>!{QE |yg zCv:Y.$wI!Гz[/6WYcN'߹2LWki2v%d"7>H c|u5.vB*=IecBko=oBeDYj0 O+(X糹Y( Ppgy{=6*2J-2; Үb9ߜ?~ITvn/=ȡqx|c̐c:BS]ӲxxA)Pqc`K ]| Wʸܳ'7ycFEq ڏ r-wf|s&[Ntjy>t=YFT QŹmC-ˏY8Sr':U"W7׻8uQ_>PNW}f^F#M|-w띁;ydts2kd,_ \dnl-*ηʷ`Ĭ/{8j6!^xp=Gfd1Zf~wn^CCmJTz:[[Y59`\$#=66u("uhύKXv &Et ,dXv αa8DaE JONa0} ;ťKq G^oTsI QH$\11$:rme aiȬ@y bBS',HOWKG)0Dkwv<h[Kb`; d0B7|F)%GP@*̹b̛k?9Gctwn[}ZLX` GUJAjuS/%,STcm"^׋^vd.t*d\|(?'vl|_=ȅ])mӉBr܎m#Χb*D/]D.]LC O%~-M chKr(Tj#\ L;%ޫ8HPɟ Y?YD`?籜VP;8(o텻0(])"&D=tf^L d+MYʫ~7ōWT[^ٳʜ)"&dM)$vuMβ1Xw;䖳_b]^b>X{vrXムX2 2h1y"LNڇcVbT= )er,ۛ7~[7đj.A}{ryGIrcSˊr3]X]A-*i !!2 xOx,;^@S9vB.0 a,c. 
__BT9_b˙qwʹ[]90Vb/^{QgJAXBFI+ˁ\q!h\ιPZ!)d?=bݲۭR?|9͖o`es7vSg6'5.ZϫWp݈4W}j"kLhMji8=" N1)jL&I!(Q 4I:]U%pɊ%cRBU_O q4䀇N&F,djZ&cDV[ΐTΘ;7DpTI7cqETp:^MwXy Fg0XX,rJe5c13U^w݆Y}86 lt?QMQ6/EOAhBZeb@rĺ~npAYi3L"Dv/!a&{>Zf+e@0;lhW4U<ݻ>^F[&2s9Es|w<9zjP4`tQm5TB`K!G75(\HR q3%Q!1&MzZQ#Zɥ0#LiJ3dμ.]9FbE)%om6F \?sE~Ft4:j~+?ݝsQ 1DQo@Er*9Qx Oiy޺")9r̝SNbG;+֞4H.3aLTn֓w?7ȔWP1Ƙ&XpvNBLׁhN%#YdFHڵ{GkL\)Anj`2;AL.3krhSBoKLx'Lx m jAǏr&NAKELI섷H`}$SsMxHFJ,'#V}&1cN; JZNvB}GyNTԌMK 0c5#IS/o$Y$Bs( Mpo:MD H &mb ^c0+Hy =(z(dلɍ@zPN7%Esx7*E!ɻjkR&MHv;!;ΥYβgϼè, CO1*ƌ\Msme9@ !鳒_3@Xzy-\0e ӛ[EefQu'O{><MC0a/U/q;_6H-[" E@ erPYvԑU: A3 A$t)u y9V2>߄(鮨~˛K2#9Y7ocYȈ{|/1K%UV+w, G .mIz˙Yc]"ژ9!,7K$꣜X͊Fg˵|^ʯ-h2,hd:#晢HfX;:rH 6j2N/1A59hFN;67QONgZ}먑zecpmͫ-ۛb@Ӧ.beT LBHjШ*$AT4J#0I8%" _6ֲHL|rf tİ <;*~awD0*4젧|w6^ *(ꚣ ڿ=]*vXmO1%Ah5k 2& /#P;kcb}EGzgS= {o-CQ2Qs}n& cF#B\=u 0HKo[.C$CܗuJ;ɴq)Xs;N2X2ctSa0t#30bڻ`j]R,̶ҞOv.~WxQ g碥ͯ"qmtB?~Z츴O! UI4p/A N9 ÇDYp$p8?'R-cZے!(G\a0NJ;& -Tz4CWx+upWnm j@{aEIZ|% VL!4IlKX5&Ux?gPKP#˽N_v ]g7WڿpQwPY6'[V'/܋+wUo rsmo8oV+ ET2 Ilh$ұ ZeN,r>j6KRs%ƭy&7A'#(lqbjKj6rƕf Rf(]is#=*LMe q ,n#D"FFT({RUaJpa㥭rkLඁb>05-y iFi\kM+i#C8d qGޠnilh0&/Åʯkq–W/ ZLr\ϞE%!#L$"i,BJRPYLhg].E rIArc#wKd{%KW*yClp("V?^o|:NWtԂfı 22P$VHs#<@""HqTF^YU`9AvNjN$s9@ EHSv6pJwMk/u&w&넥u]9' hBF9&? ]4-S-8Jq p0}, ^ Gl6e$7k78UCkՃ\nGBIC`ěi-:t I^U񣢱M(vZp<ȉrVVOnn!SpӴ@nC$"nuAv w 4Ru=4'miM_fHK \lߘDpW̗r6뫰ye]PK W4(pasT3gMhF2Q \ƨmvg,?., ]64MʐQ Tu+ 7^?RHh( 1( PqXXpp$fBS'c챘k6V'26oKX29 5;veR1P 5I&6PE0M %D(:qZ?91 0soGuG}I2а|0{m*:vn{[!r0!sѢ)Dc18 ô!J'\c`J*AUdQE"R xv R !H/E~*,9Qbvn.o|{f$7!<^s;Mտ~O`lz;]\N9&XeG]ޓJ"@}sb:X"^Lb-&ľ)"{& Fsk"2(hfB$3\i% P{3LL 4. 
h rNr_n8ǘۛ2=Ughqg[\Oભ~>/iw>o04gx}^kX$rQ>\}8-4>dRYJXc!x_V;5eicb ъFF+^"=PBf'"MD<8fi# nQ `CJ@1?R4t:#״v C.dKIG@OEƉ.W:vgw (Uw\^7 wdt~0Y̦{qlb2r'cN^ :yt򪨓7$X RSL"/,cS#(tLzkϧwF &(_:лѧ<?Da=]ޚWoy+4eMʍ!%PeތǬQ1|/!CFw )ɨza~Y4E3.h Mb0)U:A2U&JH-k 'Dph(I:aP u"H|kOb-U[&_s:R5O҇Vx_ʼ@5WO|d}|"ɇPEssƄiL&/?|\lS:y)~gf߹[ [|P.BBӗG7L)e.py?ĿQ;^*)J9^k3ךrʍB鄽#Jja=k 9zml-!~ 2s^P-`N'K~yG0qxfGtj,8, Zgs$,mTF ŜL(Mo>8W{l{ϵ&IAkY'@؎eI j⭋;}[cd~["{,Q2H_F'pi)[֍ ݝܘ MK FJ^<׫<W#' /|>Ʈ^ŮrM04P}%Mi@]T~xXO.YA{%iW{O_ooя93;r2Mnf -DV{h@k($Q/ $~ALg:;ǰ_ۤԭcݎnڛK"qE_RФ: 5*$5H22Sw]=,+@QoO4NO@,{m7b+Qyt~y0BrrĒq@nrpUH.4O4_q7y7KFETkRwunVqm@kɇ伎,noѧ1¥1".\<ÒGA\6ܞ'GAD6}nqS %ɋL]+_t.E2Mr}~ޱs4+ӭ*fmwW7J`͂dl,cR*Iv׺[R,R9E4\ےҬ{~~15|rǒ`NK/!DEPr>_6^{h*T<,@Nwq0B6Vy0堐TB[ &<ʹuѫ+W@B&U כd4v$= y$)ʾe! k (+)8]h؟&r dô76"x Z3 BMgnH~I*Z- ݖ$HԅVμ= #{-'DH*Zdߝ?ExrhAM~1j):n὏+wQ u֯R—߼WI/fB`O :L;D\1>>gD~gp4N#A9 ,A_VlvRBk7{2=N[%SK.9gDooz#GLTM-CfKҍ"Cf%iP k>zE KyYÒ_ܜi&n7dy<묽 @q.[R g ѦgҗTn &\.0 \t"&OS]d܊ 1l'!XAK(JS-,8DK&% ݊ DH0kٶ.nUH""ń#KLDF7Z4%d`dPD0exXE)r"y)MѡV[j,R)h.>*˭EN2RU"TB^uW!*<[{!Oi9 Tq8HMT%j(<㮗j96D0!8^힊i*~okVE@(W579+/<`nems(!%Q z{Dr3S_a.FY'/JXAeSF3Q_<$/9 ]ArKN8_Ou/1DR_YnGLQIt7([%mWz)SSoQ`m_5kP;CKIE{sW9׵;\u펨c+笺QS2*Rٕ ibu])U'֮rk\'/C֑zlu@ ޫu\Tʠn*]#^"B?&+U!U@T9Ѓ\eNI)iЇlTs:wLI)&kjQ԰LhLNPXwZE9b!H]4ξogy>ڪy I H/,[?}w]EQ-f_L_?nUm*4 Wp~\>Yvav\p5[\sewT+gM*Xŭ@ۅ,KTQڻ(eZa{T ,l<6rF3.T8ǐV\=*XF!Sd{6+U*1T٧pnqӥy`$3a$8q-N"yB43tD{8}ԌAoX|p&["Q&GIdzS-,bUߪ⡐L9#Awu%߰U()c՛J•UiV;=0eRЪ/noкꅃ\ %UXDr^]c˨a挟Q:wQ:-so/.?lj/M_uP`'s_ˠ(UwDN fڊ)iᯐ5y< bMغe^m?UWaZ^iyU7@ p(,sJ%)mX"$MHsz%`f,KC_x>#}78,EF&ڨjCF!a1Dxws0,f9Z^B_4~HuR嘅:dI Z. 
T2%efRFBbq uNb$LLKRRT+jb)jO-yC Ԃμ@S`2p̸&ƢPe#^ ͜V TB^kJzGz+񬄔0B4d% LpE(ZpJ^|{gImI3JjZ6gT^R%l_ ~͛ŒThAV+ZБ!U;)d,ZzB(Em׵h&+#Tk'Rq]5C\WB-t!/ɺTN vGW!רLu@wLĬQ)Zm5* ۍB5) Ifz,/jl+R;ճ0`O-sթv_&B2A :9Y\fAymA0BQG׭$WX+OijR"T5]XytGNqP4IJ63^:HRW 5ҕce?+#~EUR>BIU)]$gK!hq=Vgn;&Z7uNWHPK(,[N@EuAk & "2,Oŕ _5.SϴGL )9 1cKi:!aЪ%-qc^>2DiIY,g6gzB/}ȃTH҆FADMb82|Ǫ Y+h6IKN oqNE4rA+p!"t}!|&ps ೔Li$-:(d-1H9d^p5#^˾qa?V+h:*o}{ˑ`@kW- 9ػm$.W*q6ln$"AУm$yTkPDQ QؖiF۴Q]3gFhҕro߬D9c5*c*JpĀRv0N N.xk uEr75Bd1$j3\ֱGss&ILpCˆb£F%)%"aŔ FPddWbB -Tin]QY05J*(jB";"*}e_> g3Ue_v-8+nĞH3ފO[13DiFV[M*P*.qנpŹ, "5Pui\ 2p pNiCXЄ!8Cǭf:Z&H-Q 96w)lu"goeVnqfT ͧ< yD,2eMt[mh0r PI$ep_RVqh )'1ۼ5ߟMC6+N@.=uQ:{ոk] vnх=WZa۰C6=^')_IG**o*I[P+ =Ho[wDhm 5M`E, .9ō΋Ц8E"*L\m:LYR22xw Ahqy,CU(^Kk4Ty_ !1 `U 1Au)٩GDjJI4aM94ӄr-8fk@=cI i v.lݗdu!0ȍtwǷ*T4mnNCiGTELlQG %y;: y}NOYktW@oeMWl>Tռ~bY9̍Yhzx_̼řKq 2޽aA\}.jBS xGLV`:sVf^~u~Qt"rVlrŌˀԲF=tzO.ʻɨQO)6VeF=F=e8$) z}m6@ PǺ@䩞"Dϼ+sJ4svy%B;v3$d-\B3mqx6}`7D+MEB}&!>3! ~v`9xC8{HKR1gC [}@DX@jba.Y-p]e-g7yˠUe|Xdu:_T[E'G*ҡ+0./UevGU[8sJԺPl?wi!dk]`ztrźԗӎ.B Tu/an`0DͫmbdXp=^#2ܢ4npVcqVce%lax^>cPpǢӭr7uu,*QE Ҋ3/ɟEw[.{ڂBe/4<݁GI,R+t$:GD??|o.\%ZdwǢtԈ:ߣ|%qh~Uȉ!0eh݁a"f/03||CS.&N[Ա-L8M W* k[bPŕzeO}ņ@@qtkv}HZ16ˆx_k<\ar@ OzGj:/ue{W8߻fF٧gǓĆ+'?BQx8خi$(/ƣC@)/ }wנQW 8 Cb5-g%:s7P6&08@ؐ)"h `% _~֚W" ]smʟ~^q,OEɶ[jRfE.}Pi+S&)™T/ M~\b*`yTT/M'@h1Or. W"o><qBQoďS䰕rz Mh)J<Ty(xᇶy7Bߨ ϨTt'ͨt~jaڳ͹`/cPOZ3Qj!#R+1yɴV;Ga8Αe8 xSLDfPT][@PuQ҄3H$LfH$X;!.:G_1(xyמ%PZlMo`4/݇hn[8)lE* Jf!Wl8CΉfd nN?p%WBkPN%C, Q‚fB2Gk 7@2Y1$faݬ]QbF`Ի%z f'kϝrajg㈛ӖƔXF)>z?9o{ďj"cX.V :͝868§ZI-7F1FD+/!'e rHlbB/ll_i"ۉ`~ֺ+/(qh I=x:4 #x]m9ȥ퍷d<–Xá-dҒGqsS/Fѕo'b)K$8M'p2!kL⃕ջޕҥ6'w hb 0 M?m(oW*)fN(F"ZpYLr}1HNL!IED.a⎳[Q]D'n]qC`H$wL[?TRFϻmg,S-YӉ]#?oua"W# OyA͚5'͈z@ lt>A:;v+(G@4Tfnb\غ}.`.ds&/q'Lub,s X ^F7Lg;NI 6V=/)ZޜWPG$IK%ڵT7hI_ZJζZ0{幋#g1Q.E3fd߾J}ױ`v^Gf_5FM(ll5U]]լׅ܅rbg(+a]u G$NlTT.8w~>}R)X!@r|?Ѩ x Aue3(m)#lG,]o}I}[,l Z;$l !u.S%4I(9XY5B){$!A#82sll?TdA删R%iSu [D`"D(EBP V&! 
&}jQeÝn#;Oeo@sXOeGbIB.ȼ f;A3@.0a& dW ?)m0f9ue"biQM.kdDj N%F(xKo"|~5 <ӌ~0oxCeHDhDf\x5(ǡ}b_̉Γyz0Q%1wLDYAC!1Rq,E"axQP#H\{Lp{a5a~x R|/j׫$[Z鑽p\wf _ګq8t\$;l/){>[@uV[9 sy33KU(\qQye9INj? %cgLR\ B3pfpкP5 aDȖыSFfkĹ;%)>§]| {фurLa7-eoG0r.H"tZۏRBJB|O fZkYgGl"g tjb;|ϻ;;켻8f=^yq(|_ak C&FƑ"6jHXXJ֑4 ǚ/v<3џ3-K۶g[A,= N˿RMߛ$śټC7f-8DP`:aS5fqۜvPN1bA+Iv@fqv飵EU r"҄"NmϤAFìpP) a8TRraY D4mĽ *.6 ILEsX ~t{bMv0z2؇OxQ2q)CKnPlL(Ss"j歪Eѝ-^ϋݹ4Un -F])n1ewrJۭxQ\1oݺ(o'e8*i`d~,?1&N!%'šuf76Zϵ Z6-)oJ-M[f!bM{w6G`sKð ɺ,`Q-, v΄e,-.nEsY,^&W}F[.Fw=ѕGt(:Ph1La5 5Y9jwd)4B`jeje Dc%ggJV۠O|pP,0Rڽ.oV@e!%Kfzwo![v~Tp&Y&T!صZ 496`}GgRL|9MVb3U@ rlt&f2=|)rr K&d=Ay;ΰ`(љS⏋$30 ơ#MWN4_ g0ӏa1"5OR-^W"8)&Rhu>D*&SMUsjZe0> u{MU3D(e22Q3I<{h&83htn:?ϭVD& {8zDXt-12u}ڒ]r;<Զ$̦T ";qrm5fkPVu7UηWʥ'^yqnMJ2ooh7>Vѩ2x=VvCB"ZI 멾nxDVѩ2툯luޚvꐐV)OnP昞iE (oxnիz5H_\D)!碽.GF,c<Gc@@dhFeK{Scx?F^zvÉ@:N;3z@]6uQٳ*JhK@_-g`ټϟ>~:'Vxy1EBUwN3_Xo =;} ìa+)ij џIЛ=gբSه\LkD[,O!辜dv_^1[ۗ)}w7 M'}閿෯o[gvrcֺ=kO CÌu_}CdlȳxF24} $\DW<|u~~'t:A2|7d';ǝ`E1B t<&G]E9E &ӫh難i$jXH:+n;S c@.oX)DM#{T2C# b@&InLJ"{(G2pW?Lv;>:zCcK-* "fKSk'do`ºr%@#hJl=%q=Hubp@oa>gs}O)d0,k+lUhG狢Mc@,7L1]B8XuŴ2\[,y_33|Mran[/HO~d.慝C: F?~ ]jw9whc׿&'d gÏp?uOÿn_xr7߼z:;g0#dp?=;7~u㧷/{'W&t+dbl[ErL^$%67?ɽzv}NemP9MGp?<묑 /IfC5la͆=8ʄ4hʑ9(3 {_0^0Z#%[h&x(-ηi3c] 4} g ΛJv㠎?$=h i47Xv`l{Yy/]Gu'q.mAlq!+^yi꫹!"M%ޘ}2̞ohAN|d>{6ynXXh_??T;KU2d߭]A2H ߾>`{uѼ$Afߙ(ًdvlI= ?EFWt1v Xrd0}Z,uf v8 ȅSo$h)-RC;;'ѤxSE??߂)2ͯOd H|O3MJ~ rF5?.\_3߫(iyx=[ew.liVS3?ydn?.{[f!xjl7@YPUnşF1<EhHm;dTʳ`քK}s5zg!HW0t'`btL'M8d\JTBHǷAlkV±&hL0:EA WSpT|Ua6Ru}YH$Is,NVzOr;…B"e˽cbjF(纇0؈T.~t 'S 1&*6Rd9S&Q_KV*T5"E {q%yBȋѦ/i5+Le3mU ڊO3C5;*H5 jfn;Vi6U {=y !PGAO7|3Rax'a׈y! `TNH }J@=A`m~4d_mآ8FJ$Ʋbޟ]0ln+cT-#Ttbc>|$p >>D#uJF| _9\x;j!"{iW2QŜ2,XwfJ|lkUq)! 
W\KIT4Wx %a6\]pJ qzR؜Tc54\{U65Z_M Xs\qՠj<',5|PcιRBƖf9ܰeCku;7=[U'/Tm0!Yan~aYsw$6KIņ@JH_[=oϷ*kaj4 0w 候͵F2'RJR!Ԥij˂9EQY)!薋!ݐnaq007a7Xyѷn !o!R ,SqiLέ̹p)m-D 3&ߦQDH1?w0֝;a%@1‡v"weSy,iH)%.#թL68P6RO'ɸkg4+4@~ h󞾱a8 \k3ڕ$5Ş0{nh^[O.Mm#SֳI}͌ ]"I8mVs Sݠ I]1b$@D߂rIHtsBRM([KTqeR!˅q}FDR-٥T9R%Me*2i"`)Rr20AK Nir02Z`PDZsEJN/%` MK1G ``n$3`Œ+CFI.Ġ }2ƚ^nLF) al:ԎZ \ iݽؘעFasℛH Ѿ/<@C\&~mIO/n͠> خ@T(%[hk@#ɖ;&pE_!ܥ`SH_i /{>,￶vLl& )ɾ5^!&I+ԯJE JPj')_KA5_(DZ7GM}7}YH"ewzO{Z!7L0^72*esNG(a\2 kWʸc {wEoy̯Aj,Ffa4FmF]#-^l-љq-@99YOˣ6kE؈rNx"R~wL o <ՁA,rz6oUJBvk&5h&EO>VRl D`t}cpHq|8u(!e!bSS̆</!_`h6]in f4N]4K3gjxZPIIZPf=CԷDAP3HtK}r |p(3xL콄17}ĹădeQJ\)vE84=OآuQD/ϋTHl"uYIO~ҭn~VQg(kG#E)$C_iem Y'[1z4nj)-l2UO N銌OpJיSwnYaeY7% y"%SD^'k7Vs3v DtbD RVٵvTڐw.2%`ÕSi@)f>*w5tGJu_U,O<ړg7nk~|7vP:2@(@a86:]pΟ@8ey=)qw>Q17΢.ԁW/l:z{vOsg8P/ Ǻ /*ዚŕ"%z#ԛ+7`{+,^n AFHH[:jvѱpգ/.u7@3SnIUS$) mdA_KG1'ә?,6mgwaGo0!R^F|#@1t<#>}p pSi2[cAl|xD!4Fl9ս]7U3. ;F 彲0~^֊{h]d WߝWkMz2ZՄ =Iwcfgw0R#Ɲ͝,$'ij-*5%]K:<&p7Sn6_Nj.|paLJѽELSY\7,[uBF8P7#0ނ >f޼y \w`=F,޵AsuW5!u 5 ڮ#UI>s Oz2HƈFۜn,VmU` LiDp5]#Y̫Hyכi%!'x WVQbvF:˥#y($xbX.e^B\+bt p!ʛ/g{h*VU1?&ysJ/^f, )q&Wx#R4UGF$62A,f+&g E U/(m2ÈI-I6f$S1B_ފ]UVwZw_ڰ,3u4F%jjD VS vTq /%ҔWOLnCԊgÓ;fhǽ/|c\lwe=n2Tb̽ Ɠ;/hp+[Iq~IIMI%UM$vKլs<β(k.5Hvى1 M;)ybsVpzspͫۊQjqTp,VJ7GEdz=Q^ b4?x@uW^Qد;:6&t0)2q)&$ߚ )"#JwKƵl纨 t$ü >VkY(-zQUpB8 #DT (?pJ!Ӟ UTM[\C5()Y+%ZCM VOeS*`Dəwn(B-/Γ)lszŐ'lV;t RkD4KIZ*PCX8 䙰E<5XsHR*vtRƥ-%:PDX̤ʱ 0*'3QiyzdA")ԫk;$(O%PRr' . /sY1ccyvPږCV mûw}~O>o3qT็U,okVKhQDwwrpD>wsn_25Ͽ|p ,Ow>E JQͺEY^hQ *PT\uƾլ2-x KId<,SCQ]FNSE*bcC߿n\dH =fWPYncOJɲW<*wXlM*orjV-L:',W($ksGLTGb=ꅾYTBJюID{ Dh2Ig4T :pjk&ea,] Fc[TS.S`>q57A9RK_3g6~~;q6H/<|3 /y]^dz߫WW*vZeϻ]eNe?\bЖ?Oͥ4>a/?a+ j6~PZMbFƅ%Y(*C `9Di*_*ͧנ~wE!Lʂ' b+/ Lx[EIUbk*B{?빜%|S4'9& $2VFCUʖ@8v/ITxy"4QMcG\g?.%P "NH+rOw }.3Ge\ϓn>s50/! 
~o&}qrҺ?Ǚ]gO_\7}'ޭm8G[}C-Vߧhwߺ 6 떋AJ}G6v4*ԵuV4պC.ӧ$ڵnt@떋AJ}G6]産ެ[~[uk|]4OI8}8u 侣u2s*Eo-[TFλh>_(JiEu= p2RS/~;HS'' Ga#/ ǝU؁8&$2 EgpUnCupauvlufqv Jb}Y ]PxpOf eA,l}q90YA+<ہI@0$ՅjϷ0?#nQZ)PTlo~|9[R!> ީ[ү.~]䈍{ 8ofaf ̩p0?|^=̓Pޭfڂ55WW HJiW 73Syq"U+ J"qSZhEy Qʧ%E8|J -(%Shŀ-z6ү!9\RbyEFYM dg d kJM{~d[ b65Bҕo Ӣ&,n¢&,nEۦ/T2 ЕQ OuIHӒ) PP}>p x=y8V[Y&/z=ٸ:@vaB%dqLmvQ LqW l^O]dfϿͯnvIy]c1!RSwP%qB&VD+ۍ⥥sw!~l*1 *,wZ1B+$btPC-xTv=̦;U| ?2\_"eR'%F ؔ^Pl߈a(:+id5ed=YMi :9I{|}=ywHGdT1f)@v؉{rO;$\MPY煊`vMݽ5^Çz޳ 5#mR_,WCuI9]6梃]n:3*vf 1er r GdWEkPH(lTOZZ- Wou PζrSZ(=Ď&&$r $BSrbv@AX88mhhk#^bNo HYOAC.gt6 Dbc98PzYl) 5x ` QJt&q-Zrts%@@@ڱЮK Y\y;Ӷ#KRy, -rg<ڵȩpE݇o5¹߿G=}EW"pSX5689fvnI.,bWb4??fӻOUDbF^~{rQ~ }U`:Ǎwʑص^{ ^ wwwXU{< DXE{ķG}WkG[٨7.!qvP~_vobDQptY-ZC@hqBok#Ap(q#-H;=[r[~Ys }N*mעNEhsC\`9. XkH \_YܪW&,Ą UԅdIQbS7$srD~1)/8qغ5q(,3#%tNUHbM9t8ـdjU|d $+΂2 ޱKթ".u,drW:ɫ ID}䕸VuY!uvaK3F$j A=MI\9  Z/D) zO`$:c/MƄ_wJLPİ#"1ÈVz 2<oj&oj&^-F( CN1 "o2S +ViI[/ CĔpa$EYFr_޸p&S'v M0l7Ć-H P@L3^21Õo XP ,=\C%9B^&eܗ7v88nd2p([L2SSp~j"a"cQh(ĎdA<t}qm$(OcdG} yY,TQՔ#*J 3Y2D7|"@PXS*ࡵhK}~Lq+,rcXqa'kwmݏ;54D3Dpo$. wAD D9R uҨGz܃PLx;̔5UکFCf%~'eO}cr$.y}ay^L&L{p5%mɚ){K )u)QhFMaƆ¤L!tr53sj1Z!VQ dFOnaQIGfg `Q/Q=H!Bxs)$ʬmXH:8nj׈ Xq~N[?쬅 B狁UCHF|pw\w.ҟ/ l+ǁ/ھ)QD%["ٜ'fDk'Fo>7TJ~}1qKh3KE+uR[5D`!V $%ob U*(2׵@QMm*GMQ")Ģ$#c-ٺV_2Z0JY_0֟s[E&:|Uu:S~*؝2 ڭb̿#1 YstN . $ &s.ѪXG.R^LaLO~~ׁw|C><]@[3\Av7 \/3R-1~~#NN?M5yHЙ=L,D2"DҶ ]ft iJ&j)(.KڼD9~X,!'v\Xk$9Anoˤ6#Y`^~jB8f FjvВK^=Z@r! _4(F5r75:#(%BA^k!  
@b`d;.~rZ(A@'յPkxs'^NeV$pxӶ[~D\OZ5M6DqhTA{# 2HZ GOU.@ٖ`8zMڤm-{b&JįD`]"$>w"ikKJ8ۡ6cDdpI!l9Ƹ3ȇ<!N @g7apiM\Ç઄8)Bt:B_LcsӸ~w2h+1Q(܍`\xw"+dyPʰIԔhBDqǦCk>Ä֣B{]v N !ìFDE0cs_LH\cbmðOkȹ4z2[ 0|'}pH8ĝLEb^j9գ(YHAQȴqGL!o AȜ6\͞*H{ޭʮU`d?Cmv$.$@cK#m &E%>&5ژA ))s*Rjjk ͼdg?{ȍl/{waX8`3Inf{Xz8,_%K-fA˭fTXdySưQ6R)nSAp4T#5PVcwz2/?Vkpe zoT|Y'T@/j%!p6V҇dMgc\)m/=$=#W+:Ek7{B !t{C3PZAn]^ !S\7XXn~ Fr>\.m?FY,Mj,Md~rc} z>E :3֊#$ J%&ia PTPIҘ `, W|Tp s=gew(둻y@F6ZYYL֚5ZfmlHkh7~5Sf LGT2v-cW2v-|iKޒ)"h W$aI`Il0Z1c8aę%6dɡlA/nbxI%&{C$8z'~aFV"V4Z c#RfJb#VQ|Q Lk3dJ FQBD('c #r`8U rʠFK&zf8Уn@X+"LER!4i@atjČXt5R he2vvH) (ŕɄELmbŸH@q k @ 1 @+hUbQͅcY,.2"p[Il9iGO+}1^Q ձfIF)lLcX[OENJBtOCF! 2rgGAF{m+G&7rt@⅙?ڀcl%v`Xao^x2!1nn?""{;;ݯ~|>I?]pXnq5ͅ-G\g3Uwwf>N2`d1ƻo"N.)#$zO8;&O%!؍-mc?" ,T5ű" iZ/!'VfC^ ev¡Ef[_AUXRN֊myڐ"Œ24*$K`8QItKR;je()~A썫 bt[aD:l*@bh-=>ֻdb$z^WV K>D U[suvZ sF1H:u $3;&'Ю$)dE[ZKUTQ} zwtkt$ SjtgW3fd'mĐS&eĈIqĔ$-^ͽ]OYe{,z>fWWFʉFjl|8X4F,bږ5ź>`9S>GiVaݶŧݜ ¶qs[9?2[-V B %=/G;=wS3@e5b(ͦT Xێ}|7K  Z |rB `+#ZQ5k{ F/'b0Ov0Z[p=6MDMGK0:u^nbbǣZX ƞ@K&9!GIu5I`uYR:zC[9~tvW d{>خUl!uσ{#"sC³P6<URϜuסVAٹ 55s6<:O@ lu(0O̧ Hz*3Qgp035R::% ty,1ɩ*}?!lx~.W9iDIcHy{/1l'M#挵S~>Mj5tEKEsN֓>psAE/^")]!|Q ,V6L[7~Iub}of7^>NWwG%]hLte QFۅ~by"vYt?KbmDj~s@!R-]jOnbTrN*%f»T:Bt>:ڜu+:s[m^3 ՠ]ތ1[~zj mֳ4 ns]y1߯uV}ZrסkdfGvX"9n%'?7+_ou}@ߧ?-5BvR—kI wvةr}_OMM4Ȧr\g(өL2zv<=~ϛ.㋫ =j ΚmFOn.uhhFoӁbH_JZXOongS?/; Ւ!PD:DD2*" :4("W@eƠ8bNT޽S96uqbS%g|Ƣq͜sQ3j^?u~ X84y;HIg"i{*:TK5姞z-"EYQeÅ14ǷQ[DuX7mcYqIyyy z YSÂnH 꼂 aXdChAǰS,A~,|(>Z48c@nY  _<k~qRv{v@VWփD@ܯe}bI*y,fsG\f'ɺΝBv;j:\<,拗<'K=mq=ߟ/n_ra1biRز7~jǮ~jZܾ'h ]*6W8ƣ"˟jcx>Ǝcx>y|2s]F$sY I`M[(%܂p#xY-sɹwS\.n0dwx]XlǃZ~zT$4}:uDjj::ǃs]:33aQkA SE*hS 6$ ;ҭVA@"QrB-g

F Ʋ=Q_Vq=ݦ( 4]Y|PVdc*B6N"֝{=` rSU롽 .~і9]gsFiNC2sQs%(y/zZ/K59s F{ -D@JF-*˔hO BGVICmQu"j=ѺvMw("R7HVR |KFA Q.mkaz[fb<ٶD*R8\ GO=r!XIEDEB)V$j ݢbBb}Ƣ 8]1W:4%yǚ:/Jt$)m2lQ/_ ;Wfy PC (@}]rnzLiڟ6V CHZ{2%jKtg4ԭSj~ n5'sUd*8]beKݞ]"U!d<}r. FqH#-g 办|՘ϹJjjK5g˜?5Bg>{qkܽ~~虒̳ЅiB@.+CZG'u H/?s@m~M75^j/q>J0#"QMOAoU_s0i) b֔g`D9Ą%)7>>b!!!CX/!(ɠ99-,9a\Mb0OX#QeCrMD|G W5GeA3'[Zʜte) q (D"3wS>L2u^ b 騸Q~7Fs|{$a.bu#;ͮb~& M"+Y -ւ3q.ϗ'0yXtShUͧȌ!%/t^J`c-襘rƆ2i륫G P)MnA,)$c:R¹D&:ɨbpD31Q! VɳtWM{{]xL1*ϗ:zG)Z-ԇ\Q>w}sZ'sɻ+)#1a{@󹛁q\a 6ᣜ.,,gJ@[ޒ'j/E]]I.}3?##<_]?7`%eJ?b`=LuY~{9zS4)ۗ6ibe%/E#=(T}W%ɂߎd]iaK'=_?݁{aM|cCw5z@_>ۇ$~aXjoț( Xx Ϡfšn0fՀm'>WB_^_8oЋwfyd6Y~  ZU*BK@ĔB H)!>[@? s2i0a>j ]TR/ uNWx뼲`h47,ֿ.d@O}ɻ?y~Poǘ6%NMDd(c0i`b#6idܭEG86۷؉^V-wb; Y=jMX;I=@ECWbk퇗b#*W J"HXā?ms E@:Ѓes-{6օ^x_ǫ3^}8͊j//t\HZ}yxmxS$(ck*5l(cy#o> n֖ kQ$NS4S&#ȸ,uPLɰMD'XC nZqת\FR=,$ .)R\ lr A:J:bJR kReHguFP$Gפ :;&[[>w$s)K cRSl'ZJiCy0hYظK%ČД_.}I(Q21vc~D$T٫qJUҀj9`{' TRgd={*XPJ]fҁ }rstm>ǻuY27iQv%G~yA֦nrq7I¯lfz]Oe`fꍀ,_os;W ًQ/xcrW8;b/>*̮ 7sk?Ïݸ_OFe3-zCE~YY`ke!Go`G+.DLɡk Dg\K[K jc=y6֟ā?ms^8 9zc*EJe 렗w_bF@P^OZMn6BAm:nX@:g675nʭ|] L1ͼպDg\RBD3Ąl([TH1!%򡃴5("Ӧ"̭㐨%ҭ-VBt>;[ A5p1KMe0(n҄XMS!2 >s!"Q#d|ز@1-{7aүMM!%{Ghcz! 
'ۏ|sGkq}@8y$I#"$Xfue`FBG-J)M]gVV]3%lÆbdkNoᕀp-9d@%Qyd>=AcАs٢|8dm8\vH+;cRx}h q2,^{'fe` 廘ٕJqхY~V*^`P]v]z9z7̊lQ'FFUy"u"[mEȶHaJ['agA:V^c]4EePU= h1׳zS!FmqzAZV լXcT,h@@LE²YT]m3yԒ\c-\roaaGV3w_"Q* n]jh%x;,<گ>tjhӃO5HPOVy5,#Aeq:kh2_fYD.fssnHR?Ie3vZk$(M3LS'!,K'!D񻤄nK C޻/nˢH)8[Hiie@T4q,U Qcie*o ǷBH]6iAA@e( CuNR0Gڝ["_&"k+gOM"Ź:А'vG]t*AS+LF]{CBDJHuiW5A ( v1DpEq$9.JU#0m."!)A$FZVIU1Ԓ1Wʇ*LǕ~{Y'j5]O׿Fwv>Z]Ib2/y-a+{Yʦ'FtbD@{~v{Lq7ĜGt&z !-FwkCR WPZu E 1h9^|m؏ib4_ތ]AgJXW>hFWy%VR+Ö^ܫ-}bt^FS5(|NNn J[ՠV񐸇SZEZ!qn]&DPN¶a5bZ 9ћUSJcbDʕ?bGUˑ^ sˋFs?o~Kniws'K4N͟^*ř;Ǧ -̅Ӵq~~~wyHlP_쯜{ᖑ EQ~=ڴSpϪS7kE8䉳hOiztG$I$ո*r1>+LIˑʼnLIbqEiԷ]s8Z71^Y_n1ɾzv __xdixi\"O̯@ fbp&Xfǚt dI*@T4dHi"G[5Hı/JDs=d4,P ʾ^IlwBuzaU\N^FSb @E/*;Ў3ZG&[7N{ؒ@-_iyf(㊇?HDRpƴu;<5s hKNYqw!bZh{Y>9 xxm~Q7> mɀzF2tEϨAbѷU;`h%=֎`U `u5h֩(tw]Zk*TXHM5LC-YnR6.hD=%w =/C~t gq5]\yyUhST{O9{m(fh( e`jJ='PR&ve<+Z1wzMuEAQ1LA>s=LHH+$ VJnF^?+1}]iޱ74`+TE rZ.z|̭nZy,S\>e_RAO#1*9G§E)y F -|osԺG,ҢitЛiYteh q>lrٽ}o Y %\Aѯ*V +wNUtMH5@hBX"@ݘ9IjiLSPG&|,_]컨XRZL1Lpw 91%o;SdtcpQ;mBbm"j?aI]@`VӯZ=wu4Q=pUJO_:^G_CDXap6cz f򎩅)v>y7yweH5fS#x оۃfac=?dvkU%۽JR֥bV%ejRL#Wġs=\Cezp_nk1P-)W1<QZL SZSv/wQPsݒuѨwjc&G.zaRK5Q3eת5UdL}k4.֖pﭽfn?]@cc֖qyZz| ō^]ih1 uwR.U14%!3r/y#( M^4v^7c<uAtۀVoMa;Na~I$J$v9 E襝J"˜}{izzғ`BuiN"aG_@&j-ڑCos-m! %ǸgRAh 1yv*:1/HJ^òeKĝ+w1hN !L #<5A0d Z5}jڼTݾ#2Z30c֐O|p J|n w Ҭqu ERS9['$fRgJ2xlWx?{c$Xο\}>|WG>a:Sh§zMbHcqVd@piׅOi .ݒiF(J$f3( i08#r B\;I@!*mQd(|F=-42z`̧ЄٴyLY#V\xFI Z"OiRIiz[Xy|jϽ^Lovcleҵ4gaQxLsQLł #X'^쯣A4w//Ή$@Y_ϛ]daR_&?g0?EwV%j,f ;~>̅<dS_@Z9]fTFPRqQ> _䨤^R@iL9)lgib)[$ ^r}d$lP7d/a3~*/"Ō-&|19KQKXZgzLGIK,[eQD!*! oxps?pI@Ҝ%nZ6㵱{ވ\P"BBBCƥeR`PPkR @EސI ]%:ֵ* CZUzZ%[t+䲒qυ xg@PZk4Z0$q:LC E-TԤTbuRUϜ'8V(Oɝ4GQ .y0WW~$S^>9T= *YֺvU7*zҚUOWThPyD r-h ; R*!TnM.:UW:G^F^ktdF&{;ut Қh޸_4dh tj߸lI59\&Qr1̗zmof/;FJAOv3hwZ'X<:< ULB}"7L J͠J}G[C ߞM_/Ϣ43Te˳/>0x6>x_6uuC6?Xyux?.'~caL?O=}~. 
AM|kz37g8_Y_&a6_)͏ϻ~'ix<ۓ˿Qf9cFjRMlx+sɍ8F@ v6[?VlW{H5TknDsmނ}wgR|p&Q9e 7~6/_Ēj)?^N%Q?s O}xx1.nԢL񖬳2hGnrZ~5fϷ7h{e"S:E˵!"ʔG԰tE{Zf a +iP ƅ.S^C $*/>g_ӮXU:n4iMMϿc&O>"y((Fr Vd XXZ4;,ߜOxOrF6r%9t*7*O+X)Dy08Tij*Sg fL[9GTNNw~hGiCG_s?|t, i8)PjLq|hauō6-g-ye^@n9 $AsD)d˕zFQK{~gWQE^E*h9I! %\ZR2Ʌ !̱8j\p wš~jߩ5rV^>fm˴̽!_lpꓣ<q0*q#} ߒHPRjj8v,4/{T_:hƏSqoFeSC`B޳{HIe`-ObeT)ь ^eJ'Ep\Im4uJ1S[J1S6AJF/=F=$6~)~ׯ6>gIbp;8L׸/5Ɇ8FhfѦ˾UqTէG1|15^Է>=Z-8gopqP^NIʝ]+7+Rn c"GYx Y+ 5D\C\+ ;@V]H~KuuUhw@"D"sEDs""~J xEhKIPDsuO )/ig 5 YK[:UZ{G xD: =躯뾊{wPB&gb b5~4URG$>9^z{wu_Tkӵapj\SS %(ȯ9 3ʜ@YpidFߡ8bP@K힪U!`~JFFCNPUr œv+ҩ;jeز1ۤo}֎J"'5CXqgꗕı3i6z~=bbT}/i JmeC0c~bՓ}HG(ht;m&#%z]/mjQ-|,YpksC\*=$ bE.nu&e]j5mJƚdo"nEx]Z-LrZӝ3Mխ(0.[G4?9}响rT];owU$Ց8xRzNV4bi Dl!յê m$-݆p(yKp,˥5=ox-kaYsNۃxja!"mlqÃ\3] &,^M@ӶVФShj Jt/]hdE!ETjH,+k%bkB֣=pՇ))U1FŵJϔUThj9U(&P~&F1M8*(rRoctuZyO$"phx#[EE7%&kT3$3ԡ\3 %͌v>An)Dƫjn>i]!Gu[Ǡ @|=FI|Hz.~O7@f%?oI@Q#e5>6QDh޸CpJ|MWW+R`m䌓T*uh1IhʠvAR-{h}pE]qu");:`2nP6pNPp94n+A DDs8ժe>ƕQ"x+2QyNA3r*)F^s}[!_Ք,g)/42NJ-ɢ~_bM 2YQ4f.q1I?9``V-ӻL\ KF;NbDB)Aqhli>5EC. rg 9]~,e1ĜYF -mReE,IcD𹛎t1JGIj℧$T&I9 l௨ X,6 jyk?o--fgU_O`-2A>9].n1|j C4LàzOWS0U,(ɎQVMTX[Ehfj5 SAn6 =HCZ[kܵ;`w_B`'l^lL-u}`#td:)S ({99ĿgZAZ83?=(.5},D>_$*<)JLjA.j 逕P.L#=2 "YDZ֐-'0M0ghn&&wVS{whW3}3&.8Lxz׊gh5kQYǘR&^_E;?93~u74{`aW'ysl`tk? L&ϟp~MVU?of!l1"TژR-:gkSjBj",eu,Te`&Xی1Ô}==%Vm!&4bZ^Yk$gDR v` ]SVW$5"):&c(*V'djYϸqĽ7/+Y:*Xy1F&:UJ"lVMi SIj0;&Xp,1Iԁb'ng?H)Ν4ékJ?࣍GKM0eh#"aGq^oӈGq|x)WL\k?on@sILFl?.Ym?Ii()gAo]n<`޼5&YOǢS@rOEˉNA|T@WMMi b*W4L;rw|Z[/=yiJ|T t-嬀-ـ2|ꑴg9ؚa1ˬzB3UVWұK+`T@ƯAn_x1[%`XҫW;pVBVHv(9wݒijE~}4'$@v1A}h*ZBZgpbR3J6/<[o~L~jճö%Lۈql"J+9N?l'4wJg4c|tYR g>݁fw% ^ʪ:.dgqS)yY/ rWŌtqɉ>:HG*'ԝRw9oݮ8ad}N & w{.rVEK \3 wiyH_fח`m~JA=Z/ l+5j j3sw6S2!>CX]'_4㱃0iqQWB4ŻC,ڻv<>[_~0dc\7[Ij0y0/WxZ?JjsڴɼM]R))[M} 2hyeq p&zrvbY;‡HS'8V]Dd*Uz4\BPONZmB66F\0VK`2RLbY_9d{wd;A܂wX(/7%2.R̩XH%H8 RHǛJjQL9q20e,M#ieBQ! u}6X_hY~ V|HX RL9 ֱ)L\8Q&cٞr<|Uf ci23,I4DL_D# KUB OōݔhBE+Q"5yNߡd2*Id*$V'0spi[$f̳~ˬ Q,U Θ:)2`Yh,-0{ -qaav؀bv 9x~cF xke. 
)teW(!TuZ#il{OJ$V,DŽ 86!Fs@).QHČ}I }g [6`ǹ9NRmc EN004 rx= es` Eom}/ (t1_W1*C^In66B"2SֱKD̂g(NSXjy*Tx~ R9BP ,Sq˄RKin?/Cg+j9+*@RAQ7)u~1 vVLZPg*χ~5g`B"p_('q^V\*\!l BA UErʷtϚI͜idƑL=`'W7uv ƙ`E"lJ7.9klVY .FFϟ%&!ogI^B mKV 9U%zwszFpGϪ&39FG g7ɒ%H&ҤO^jSFIϠ$9i>2A>o9j(j9CbJC7^`nGvuؓcz‚;?Hhh>_f??nβ2Iͮv6=We|H{faCW *Hke{:=k\WU}44b(nQEZc*nA)On_'˽ [}|pɊmXY1r_Y/Ǯ:/FG%C`G73^B÷Yӄ\Wq ,%F74( sK,h92:Q )͗^i9EHd-]-%qaI5yD޾WJ_A ,k 5Vkko޺UWgt2LgϸqĽ7/s"1ϻ1@%Kc nT6z:v}*5d=3V:^ygպshj N: ;5uŽk<6䮝bJR/ v< (k:AX}L* DuP+4juTn9 i "dլ$"Ri"U8Bޔ`jqƸ)11SBs!M`ඓf,a*T.&J6N-)JH"QvXDlRx"α$,E u",D8vZ!;'58IJRS@'Y   -X&A=SJn}R9`ƇlX*iwvgS)!dS˙@,Qo/wܫZ}uKtU;K v KJwb3%{31wFlZ';9;[UwM{w\7aGw{0C'!J4@Ip :(C bo)47kuZ!YC[tGr.S%|ف@YĽ|H'ɷ[ ?V]/VՍ }WuZfa6w at.z|Fa!ZҼ Wmz|j?7S ?ӽ'Eeu/1U}]/ ,Q L+h:N58\օ9pcg+a͞29h8zW=z!= Q se.į y"ZA\B}%}x+usE~ݗY"uSÛtͦjǫa Wd(&^+3 qWEZ8Gt[I8JS. t>.KoYr7$ 5z Rv%ֹwZjM|8.]6Q*쏽~.*z0zkEP<"\BHRhՙVbgPcвsa7Nw+WlYuK+bgZ+,Wpa*] ZB D5^*.gADA+Qw :B=efbf7AုjׂĦ2M!EU}}K, z՘&pI}mnQ Dg^k@"4j8%CRVuB2X;>@V&`DᑱaGP" Dݢ0췘ļۧ1mIK4'HǼg[WyhEҨGԡq~&R CuH$fş m{f.[s5+)1MB]ʽDJ1 b(M41+ͱ+ü[ﷃ}=W_g^eIV [}*<^>ۖ*]W7odhr$yozˡU _?yXUʋx{aCM"NP,MM'^RC 1̩Ԫq,O"]x$C !: %Oea%CIE0 ه3c}5c~{~myǤ]"2yM{UUvn@G:/ybPAl*h- '[cbQa0#@ y <*b``:0953gd9 `Eв0bY"+Qx*@;6#n iI!S/hji ZɑMWe\HݪР‚0uJ!8R{ļ(AVJX jS& OH +!UʨPKHBO},Pci.oOq;{4$TZi[>=pO: +|)dZjNWߐ/&\ 4VpuAC끲7w`>NgKKCjۃOW# A@/f:f4M;K"g(Z A\&K||0y $;K.$KYF6  Lj6g$R9fGWFady#EDyͤ\YhE T f;Li8mk+Y>qF4qyD3u0|?dwrPSJK$/lƼFXrQAϞ,B=ABY~rGԳ<8<&lR/<(&ڼ]ӦsaywMȺ b2y|;A^2҃]]l܏7W vsT7wW1-"U)&@;KB̦f&_*&olL&)4iMw21J\*RD F4:3`:= F8O S)$f$M´5, lS' h9ʭ\Aۚ{eʳa֤Q6QKni#sWjRt0Jd|Iv[Hc\ߝ;1Ua:WʨFr\skRH)M"P3ϴ3/oYwhYGpcv YL9wG^DDQV2]pES&x~ Gֿ7;/>{0 'u5AW\(LB1QZu5 Rh蕐[[5g ̸>bH %lpB+0ID, JP J$wIT&jڷÄIθyuj+" '!JԎT!0 M0CDOdU p>kQOf QKO[GwT\& S~uuqp/cys?,{YM_Z; SWNꓡ_ȴT_r R]r;=Y+cad(9F`Nl=% 4>==%3Tb+cR+)=T=~پ7},s,`n?Nf8k?O܇`~3\[h뛊%mŮ\~D߂.C><޽}t:8xF2ɠ?PrYS3wĤn3]Ol9~&Z˭u!ګV8p=tHji6I$o,?JI_Lsn/D&-2@s_:i9y 89bQ$T8g:>R'K֢ى"?o6D5;/E&ΛENW1ES;^Y8n~d1v~\l(X˻js֬PΠz$9!U*MU덱rT0[*[^xU `ަ@3ڵïڀ5e@H nʐGDj?f<՜1|ˆr\뺋+C!T3p $MD}+G @.d5&{^9 
@tVk6B⌐KLR(9;q WFDv7e9WIx%1}Rʿgo(Z\:*%I7ݼb;j%EWaLӄQ&i(aj Ryʼnb'L߾\ E&>% =T2hkFT\È)}cǭѰN޾ySZb㼢 |]k`]MyAS^8U^XjGY a"9Vt95K"LEG Trr#M}eGqͫA2Q#_$@| [$5B,[BAkK aYsL+]mo8+F>._onoۘ4%QƤtObЖS޼|mY*=O*x(̄Wp.7xfq;62NUр9DUр;../պlkd,Ll?{w v'O[np!]1EY^u RʾS`U ~ VYf2[orʀ}B5Ϙ4?[gVRfT^AZ|X5(X5S_􊧋6t%G{4ͪ6zuy/ڐ]=VQ&*Nt]R߃SrDWNmK}}h[jw?۷\AQxEކ z  49 HP*i4(҃׿cLI=6R%16t86DC ;ڲcpWP!d8,RE9)^t|' ^Yqߤ3v{ 4yYqN&/(?/޺Um/+^}, v!%,n/PT~ʆLv4PR[-#0Z*bR[[J%p x7 bOV3 M!X) rA$c_7BgS3!~ÆKrΑ9`f8,pp6d[ aFc\gHRڽ -PC*ȩ5C|u.Aɴ}=~ų3nip e!JϒWh%(<~JpKM g;|^7uߺp^4A2ȴ6gåbO߄@IaňPPNu ^ W~~sqUo.їΰ]}9 #JI*RLZ !RHaH11y" J.ksVҹfϔV<# cmQDhx*U3)4(@6)k}/H3-HH`iyjE)%J '&ARf`tE!hFH÷3 0P:T˞Xn= cw[wAn=dzyXG$7U7KI@+#}(K;NƐek:VNmޏgn>}}@5"AdX&,3LԬn( ʙZ)EݧΌ7i' }8Y{]eR^ ź_ܹ'w6[Pˆ$]aW ?~h}$6iE^ܭB*:w 3(:q{gyo_W֩ܘ.wɡPR7Yn_Sꕤl/6bf' 9:۵i9$,]R<`GeY1JT( (F{,! lm'sKmǴ~K DncA<㼉cϬAXg+hF1T[3B]X=Gϳq  K bj6Fa/ Qn J)Im: !/"( H1nJ43$3)D KM]fH6p0S=@LXc ф`%27JdNq JkjeSb.a1+/!81X@s.)")E8O8u[hxuYI dE5}_\JBTKE`W"OOҊ4)vʠ*%G}gnu|\K/>m.c'-j_))Hdim[Bl) WHC6--Ċ&5yj54C?@iyG%DSX B>I9-kt}{"-АW:@ SRf OPBQ1% QZ +. 1?+nڗ 1koo쬑6\Dq˽P?]ɶQugWp=>~aw+Da_WLd>4^W+&q3ĮZo(((M/"RSmBM pl`qerňdZYÖ/UlCefR;nkGƜn$众ü.Q}P9;i,%,Tc@K2i-#)A?V]nܫ? 
12909ms (23:08:35.979)
Jan 26 23:08:35 crc kubenswrapper[4995]: Trace[1754602208]: [12.909923606s] [12.909923606s] END
Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.979817 4995 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.980539 4995 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 26 23:08:35 crc kubenswrapper[4995]: E0126 23:08:35.980797 4995 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.980909 4995 trace.go:236] Trace[170435702]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 23:08:25.104) (total time: 10876ms):
Jan 26 23:08:35 crc kubenswrapper[4995]: Trace[170435702]: ---"Objects listed" error: 10876ms (23:08:35.980)
Jan 26 23:08:35 crc kubenswrapper[4995]: Trace[170435702]: [10.876401557s] [10.876401557s] END
Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.981386 4995 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.982736 4995 trace.go:236] Trace[1552439728]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 23:08:23.563) (total time: 12419ms):
Jan 26 23:08:35 crc kubenswrapper[4995]: Trace[1552439728]: ---"Objects listed" error: 12419ms (23:08:35.982)
Jan 26 23:08:35 crc kubenswrapper[4995]: Trace[1552439728]: [12.419317108s] [12.419317108s] END
Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.982778 4995 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.007565 4995 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.026819 4995 csr.go:261] certificate signing request csr-xt48l is approved, waiting to be issued
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.047026 4995 csr.go:257] certificate signing request csr-xt48l is issued
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.312937 4995 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 26 23:08:36 crc kubenswrapper[4995]: W0126 23:08:36.313223 4995 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 26 23:08:36 crc kubenswrapper[4995]: W0126 23:08:36.313258 4995 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.313321 4995 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.164:59338->38.102.83.164:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e6a96374ef19f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 23:08:17.008742815 +0000 UTC m=+1.173450280,LastTimestamp:2026-01-26 23:08:17.008742815 +0000 UTC m=+1.173450280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 23:08:36 crc kubenswrapper[4995]: W0126 23:08:36.313438 4995 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.451495 4995 apiserver.go:52] "Watching apiserver"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.460752 4995 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.460912 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"]
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.461277 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.461548 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.461603 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.461662 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.461684 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.461719 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.461742 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.461778 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.462088 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.463785 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.464059 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.464485 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.464507 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.464919 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.465294 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.466042 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.466731 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.467894 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:28:50.215358495 +0000 UTC
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.468054 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.556354 4995 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583501 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583544 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583567 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583684 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583740 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583768 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583791 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583821 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583852 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583882 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583911 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583925 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583939 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583990 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584025 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584059 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584088 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584133 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584163 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584178 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584200 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584233 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584260 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584289 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584317 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584339 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584365 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584391 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584418 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584441 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584465 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584490 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584511 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584539 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584565 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584591 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584620 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584645 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584675 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584701 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584723 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584750 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584775 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584809 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584832 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584860 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584889 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584912 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584938 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584949 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584964 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584997 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585022 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585050 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585075 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585116 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585144 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585174 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585197 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585228 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585260 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585286 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585309 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585334 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585368 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585470 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585503 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585529 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585557 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585579 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585604 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585628 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585651 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585676 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585701 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585724 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585760 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585779 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585786 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585812 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585863 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585889 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585910 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585933 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585957 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585984 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586008 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586034 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586063 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586120 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586149 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586490 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586524 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586776 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586876 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586932 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586964 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586998 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587030 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587053 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587057 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587299 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587325 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587349 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587369 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587391 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587485 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587505 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587512 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587527 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587551 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587573 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587594 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587616 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587638 4995 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587656 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587677 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587698 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587714 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587733 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587751 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587773 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587792 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587812 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587831 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587829 4995 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587850 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587874 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587894 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587914 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587934 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587953 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587971 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587991 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588013 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588034 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " 
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588053 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588072 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588092 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588125 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588142 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588164 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588184 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588204 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588223 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588242 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588261 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc 
kubenswrapper[4995]: I0126 23:08:36.588281 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588300 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588320 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588338 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588368 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588391 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588409 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588444 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588462 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588481 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.594995 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.596706 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.596756 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.596791 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.598146 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587829 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588035 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588182 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588405 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588850 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588951 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.589913 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.591483 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.594500 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.594752 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.594781 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.594808 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586749 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.594939 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595211 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595441 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595435 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595537 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595690 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595691 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595765 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595911 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.596148 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.596512 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.596789 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597071 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597080 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597096 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597228 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597318 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597360 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597591 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597621 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597779 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597828 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.598059 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.598142 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.598239 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.598647 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.601187 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.601271 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:08:37.101229808 +0000 UTC m=+21.265937273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.601402 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.601213 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.602366 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.602628 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.602732 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.602825 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.602918 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603061 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603138 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603163 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603355 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603382 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603438 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603465 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603484 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603529 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603553 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603574 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603613 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603635 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603658 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603691 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603714 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603751 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603770 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603790 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603809 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603840 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603860 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603888 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603926 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603945 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603967 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.604002 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.604019 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.604039 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.604070 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.604092 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.604918 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606626 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606830 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606878 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606908 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606937 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606963 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607017 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607041 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607067 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607087 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607122 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607150 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607169 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607186 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607208 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607228 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607245 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607262 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607281 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607298 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607374 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607387 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607400 4995 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607411 4995 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607421 4995 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607431 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607441 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607450 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607461 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607471 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607482 4995 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607492 4995 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 
26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607500 4995 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607510 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607519 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607530 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607541 4995 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607550 4995 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607560 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607569 4995 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607578 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607586 4995 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607596 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607605 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607615 4995 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607625 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607634 4995 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc 
kubenswrapper[4995]: I0126 23:08:36.607644 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607653 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607662 4995 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607671 4995 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607682 4995 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607692 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607702 4995 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607712 4995 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607721 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607731 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607741 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607750 4995 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607761 4995 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607771 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607780 4995 reconciler_common.go:293] "Volume detached for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607789 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607799 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607808 4995 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607818 4995 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607827 4995 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607844 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607853 4995 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") 
on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607862 4995 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607871 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.609259 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.609395 4995 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611034 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611766 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.602517 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.602727 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603023 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603209 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603454 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606360 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606381 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606464 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606564 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606615 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607265 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607426 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607528 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607690 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607963 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608031 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.616371 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.608227 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608253 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608327 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608343 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608350 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608506 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608602 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608894 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.609114 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.609420 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.609780 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.609896 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610053 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610011 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610190 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610390 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610447 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610648 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610667 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610830 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611117 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611246 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611638 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611702 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611761 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611779 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612357 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612381 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612407 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612664 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612793 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612808 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612920 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612991 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.613360 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.613638 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.613778 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.613909 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.613991 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.614335 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.614669 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.614811 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.615181 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.615517 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.615698 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.616117 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.616676 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.616714 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:37.116693544 +0000 UTC m=+21.281401009 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.616788 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.616874 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:37.116867189 +0000 UTC m=+21.281574654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.617179 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.617185 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.617243 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.617751 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.617881 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.617977 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.621214 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.623283 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.624321 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.624687 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.624830 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.624930 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.625448 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.625771 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.626787 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.627310 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.627408 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.627654 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.627989 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.629004 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.629258 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.629380 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.629403 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.629418 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.629482 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:37.129462677 +0000 UTC m=+21.294170142 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.629714 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630002 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630053 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630052 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630354 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630608 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630817 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630926 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630996 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.631148 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.631332 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.631456 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.631766 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.631967 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.631997 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632117 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632260 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632279 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632293 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632302 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632515 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632584 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632586 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632772 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.632973 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.633010 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.633029 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.633159 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:37.133067403 +0000 UTC m=+21.297775058 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.633407 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.633554 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.633361 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.633928 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.634091 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.634084 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.634314 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.634388 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.638024 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.634886 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.635216 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.635570 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.635887 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.636045 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.636290 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.638424 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.636487 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.637283 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.637280 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.637593 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.638438 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.638722 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.639234 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.639298 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.639553 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.639967 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.642507 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.643305 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.643401 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.644144 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.644404 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.644491 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.644681 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.644699 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.644965 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.645037 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.645168 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.645732 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.646059 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.646521 4995 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7" exitCode=255 Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.646565 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7"} Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.646617 4995 scope.go:117] "RemoveContainer" containerID="f4fe19d7a699e1baf501eb85ad819135c0703d5f5a1c7f270a1ca5f4092131fd" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.650698 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.655378 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.662396 4995 scope.go:117] "RemoveContainer" containerID="dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7" Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.662678 4995 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.662987 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.664944 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.667305 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.667304 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.675026 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.679809 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.680137 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.691170 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.700164 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709074 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709180 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709236 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709309 4995 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709325 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709339 4995 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709350 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709360 4995 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709371 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709382 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709393 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709402 4995 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709414 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709424 4995 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709435 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709445 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709454 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709463 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709472 4995 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709321 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709483 4995 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709533 4995 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709548 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709562 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709577 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709592 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709606 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709619 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709631 4995 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709643 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709656 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709669 4995 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709680 4995 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709691 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709703 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709714 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709728 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709739 4995 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709750 4995 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709762 4995 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709774 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709786 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709797 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709808 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709820 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709832 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709843 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709855 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709870 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709882 4995 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709896 4995 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709908 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709920 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709933 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709945 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709958 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709970 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709982 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709996 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710007 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710019 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710030 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710042 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710052 4995 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc 
kubenswrapper[4995]: I0126 23:08:36.710064 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710079 4995 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710091 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710893 4995 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710910 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710925 4995 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710938 4995 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 
23:08:36.710949 4995 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710962 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710067 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710973 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711066 4995 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711078 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711092 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711116 4995 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711130 4995 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711144 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711153 4995 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711162 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711172 4995 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711181 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711191 4995 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711200 4995 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711214 4995 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711226 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711237 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711247 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711260 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node 
\"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711271 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711281 4995 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711289 4995 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711299 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711310 4995 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711320 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711329 4995 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc 
kubenswrapper[4995]: I0126 23:08:36.711338 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711346 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711355 4995 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711363 4995 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711372 4995 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711381 4995 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711390 4995 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711398 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711407 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711415 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711423 4995 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711432 4995 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711441 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711450 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711458 4995 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath 
\"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711467 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711476 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711485 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711493 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711504 4995 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711513 4995 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711521 4995 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711531 4995 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711539 4995 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711547 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711556 4995 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711564 4995 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711573 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711583 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711591 4995 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" 
DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711603 4995 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711612 4995 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711621 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711629 4995 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711638 4995 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711648 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711657 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711665 4995 reconciler_common.go:293] "Volume detached for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711674 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711685 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711694 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711704 4995 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711713 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711724 4995 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711735 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711762 4995 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711772 4995 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711781 4995 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711790 4995 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.719390 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.729127 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.738664 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.751229 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.759464 4995 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.767839 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe19d7a699e1baf501eb85ad819135c0703d5f5a1c7f270a1ca5f4092131fd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:30Z\\\",\\\"message\\\":\\\"W0126 23:08:19.601263 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 23:08:19.601594 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769468899 cert, and key in /tmp/serving-cert-4050516234/serving-signer.crt, /tmp/serving-cert-4050516234/serving-signer.key\\\\nI0126 23:08:19.891540 1 observer_polling.go:159] Starting file observer\\\\nW0126 23:08:19.898437 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 23:08:19.898613 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 23:08:19.902820 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4050516234/tls.crt::/tmp/serving-cert-4050516234/tls.key\\\\\\\"\\\\nF0126 23:08:30.280915 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.775424 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.777598 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: W0126 23:08:36.789277 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-5c12f86f42931ccc3c7576c91d0d994f756d10e1e5d4b3f810a8642e430dec85 WatchSource:0}: Error finding container 5c12f86f42931ccc3c7576c91d0d994f756d10e1e5d4b3f810a8642e430dec85: Status 404 returned error can't find the container with id 5c12f86f42931ccc3c7576c91d0d994f756d10e1e5d4b3f810a8642e430dec85 Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.789966 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.793023 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.800001 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.804967 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: W0126 23:08:36.810730 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-d678310b177bb99398619a51da0ed4605202169e8d1f25688e5730c25d022ea5 WatchSource:0}: Error finding container d678310b177bb99398619a51da0ed4605202169e8d1f25688e5730c25d022ea5: Status 404 returned error can't find the container with id d678310b177bb99398619a51da0ed4605202169e8d1f25688e5730c25d022ea5 Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.817635 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.831626 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.841524 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.858533 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.870372 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe19d7a699e1baf501eb85ad819135c0703d5f5a1c7f270a1ca5f4092131fd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:30Z\\\",\\\"message\\\":\\\"W0126 23:08:19.601263 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 23:08:19.601594 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769468899 cert, and key in /tmp/serving-cert-4050516234/serving-signer.crt, /tmp/serving-cert-4050516234/serving-signer.key\\\\nI0126 23:08:19.891540 1 observer_polling.go:159] Starting file observer\\\\nW0126 23:08:19.898437 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 23:08:19.898613 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 23:08:19.902820 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4050516234/tls.crt::/tmp/serving-cert-4050516234/tls.key\\\\\\\"\\\\nF0126 23:08:30.280915 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.048556 4995 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 23:03:36 +0000 UTC, rotation deadline is 2026-12-18 01:23:28.544984682 +0000 UTC Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.048628 4995 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7802h14m51.496359722s for next certificate rotation Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.115164 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.115320 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:08:38.115296911 +0000 UTC m=+22.280004376 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.215951 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.216025 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.216054 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.216119 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216165 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216190 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216197 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216203 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216243 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216252 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:38.216233215 +0000 UTC m=+22.380940680 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216365 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:38.216351868 +0000 UTC m=+22.381059333 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216382 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:38.216372929 +0000 UTC m=+22.381080404 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216494 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216570 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216589 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216699 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:38.216665816 +0000 UTC m=+22.381373481 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.468303 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 09:17:42.46877918 +0000 UTC Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.650286 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3cf15b92960d60889cb4e79030289e7f6c110c85abee044dbc223e964c6749dc"} Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.652335 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71"} Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.652367 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e"} Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.652379 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d678310b177bb99398619a51da0ed4605202169e8d1f25688e5730c25d022ea5"} Jan 26 
23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.654243 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832"} Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.654268 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5c12f86f42931ccc3c7576c91d0d994f756d10e1e5d4b3f810a8642e430dec85"} Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.656114 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.657993 4995 scope.go:117] "RemoveContainer" containerID="dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7" Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.658136 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.701141 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.740376 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.765582 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.781425 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.796498 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.811661 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe19d7a699e1baf501eb85ad819135c0703d5f5a1c7f270a1ca5f4092131fd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:30Z\\\",\\\"message\\\":\\\"W0126 23:08:19.601263 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 23:08:19.601594 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769468899 cert, and key in /tmp/serving-cert-4050516234/serving-signer.crt, /tmp/serving-cert-4050516234/serving-signer.key\\\\nI0126 23:08:19.891540 1 observer_polling.go:159] Starting file observer\\\\nW0126 23:08:19.898437 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 23:08:19.898613 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 23:08:19.902820 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4050516234/tls.crt::/tmp/serving-cert-4050516234/tls.key\\\\\\\"\\\\nF0126 23:08:30.280915 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.824492 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.842255 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.858537 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.870471 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.889911 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.906285 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.920920 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.937271 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.124437 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 
23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.124598 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:08:40.124584121 +0000 UTC m=+24.289291586 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.145315 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-m8zlz"] Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.145591 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hln88"] Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.145723 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-pkt82"] Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.145735 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.145880 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.146310 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.146734 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-sj7pr"] Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.147203 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.153450 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.155993 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.156087 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.156436 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.156661 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.156753 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.156834 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.156945 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 23:08:38 crc 
kubenswrapper[4995]: I0126 23:08:38.157029 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.157160 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.157179 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.157062 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.157308 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.157320 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.157961 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.187181 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701
db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.207356 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.223565 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224837 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-netns\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224863 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-os-release\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224877 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-system-cni-dir\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224894 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-os-release\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224912 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-conf-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224933 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-cni-bin\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224948 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clj2d\" (UniqueName: \"kubernetes.io/projected/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-kube-api-access-clj2d\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224961 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-cni-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224974 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-cni-multus\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224991 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" 
(UniqueName: \"kubernetes.io/configmap/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-daemon-config\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225007 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-etc-kubernetes\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225028 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225140 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rmfp\" (UniqueName: \"kubernetes.io/projected/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-kube-api-access-7rmfp\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225163 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225172 4995 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225201 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225215 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225259 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:40.225243679 +0000 UTC m=+24.389951144 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225259 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225301 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:40.22528937 +0000 UTC m=+24.389996835 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225180 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-multus-certs\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225326 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225341 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4ba70657-ea12-4a85-9ec3-c1423b5b6912-cni-binary-copy\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225364 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225382 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-hostroot\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225397 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25pf7\" (UniqueName: \"kubernetes.io/projected/4ba70657-ea12-4a85-9ec3-c1423b5b6912-kube-api-access-25pf7\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225412 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-884rn\" (UniqueName: \"kubernetes.io/projected/15f852ca-fb3b-4ad2-836a-d0dbe735dde4-kube-api-access-884rn\") pod \"node-resolver-m8zlz\" (UID: \"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\") " pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225426 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-k8s-cni-cncf-io\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225440 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-kubelet\") pod \"multus-hln88\" (UID: 
\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225454 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-mcd-auth-proxy-config\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225474 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/15f852ca-fb3b-4ad2-836a-d0dbe735dde4-hosts-file\") pod \"node-resolver-m8zlz\" (UID: \"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\") " pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225485 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225513 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225524 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225573 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:40.225552716 +0000 UTC m=+24.390260181 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225493 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-system-cni-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225614 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225635 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-proxy-tls\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225654 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-socket-dir-parent\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225676 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cni-binary-copy\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225692 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-rootfs\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225724 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cnibin\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225742 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-cnibin\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225778 4995 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225821 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225849 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:40.225839853 +0000 UTC m=+24.390547318 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.235500 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.244913 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.257650 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.267941 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.276392 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.287346 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.299041 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.311984 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.324650 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327007 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327188 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4ba70657-ea12-4a85-9ec3-c1423b5b6912-cni-binary-copy\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327292 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-hostroot\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327399 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25pf7\" (UniqueName: \"kubernetes.io/projected/4ba70657-ea12-4a85-9ec3-c1423b5b6912-kube-api-access-25pf7\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327494 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-884rn\" (UniqueName: \"kubernetes.io/projected/15f852ca-fb3b-4ad2-836a-d0dbe735dde4-kube-api-access-884rn\") pod \"node-resolver-m8zlz\" (UID: \"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\") " pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327582 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-k8s-cni-cncf-io\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327670 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/15f852ca-fb3b-4ad2-836a-d0dbe735dde4-hosts-file\") pod \"node-resolver-m8zlz\" (UID: 
\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\") " pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327421 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-hostroot\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327811 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-k8s-cni-cncf-io\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327808 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327852 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/15f852ca-fb3b-4ad2-836a-d0dbe735dde4-hosts-file\") pod \"node-resolver-m8zlz\" (UID: \"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\") " pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327923 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4ba70657-ea12-4a85-9ec3-c1423b5b6912-cni-binary-copy\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327944 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-system-cni-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327767 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-system-cni-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328168 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-kubelet\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328259 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-mcd-auth-proxy-config\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328361 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328449 4995 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-proxy-tls\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328553 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cni-binary-copy\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328635 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-socket-dir-parent\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328717 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-rootfs\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328802 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cnibin\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328899 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-cnibin\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329011 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-netns\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329127 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-os-release\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329228 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-system-cni-dir\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329322 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-os-release\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329429 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-conf-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329531 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-netns\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328848 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-mcd-auth-proxy-config\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329464 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cnibin\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329487 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-rootfs\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329261 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329510 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-system-cni-dir\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328912 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329610 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-os-release\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329292 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-cnibin\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328262 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-kubelet\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " 
pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329672 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-conf-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329672 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-os-release\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329454 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-socket-dir-parent\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329773 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-cni-bin\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329538 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-cni-bin\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330269 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clj2d\" (UniqueName: \"kubernetes.io/projected/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-kube-api-access-clj2d\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330385 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rmfp\" (UniqueName: \"kubernetes.io/projected/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-kube-api-access-7rmfp\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330479 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-cni-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330568 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-cni-multus\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330640 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-cni-multus\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330598 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-cni-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330825 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-daemon-config\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330922 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-etc-kubernetes\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.331021 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-multus-certs\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.331171 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-multus-certs\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.331287 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-etc-kubernetes\") 
pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.331339 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-daemon-config\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.332624 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-proxy-tls\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.337716 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.344066 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25pf7\" (UniqueName: \"kubernetes.io/projected/4ba70657-ea12-4a85-9ec3-c1423b5b6912-kube-api-access-25pf7\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.347471 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rmfp\" (UniqueName: \"kubernetes.io/projected/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-kube-api-access-7rmfp\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.349057 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-884rn\" (UniqueName: \"kubernetes.io/projected/15f852ca-fb3b-4ad2-836a-d0dbe735dde4-kube-api-access-884rn\") pod \"node-resolver-m8zlz\" (UID: \"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\") " pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.349152 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clj2d\" (UniqueName: \"kubernetes.io/projected/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-kube-api-access-clj2d\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " 
pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.354249 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.372203 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.381434 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.385988 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.391435 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.393591 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.401874 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.419264 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.439384 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.460010 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.462836 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.468906 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 06:17:32.990702686 +0000 UTC Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.471407 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.473166 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: W0126 23:08:38.478572 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15f852ca_fb3b_4ad2_836a_d0dbe735dde4.slice/crio-5411a42de63204581cd09f5268bbda31765aeba3655837714630d122899a832f WatchSource:0}: Error finding container 5411a42de63204581cd09f5268bbda31765aeba3655837714630d122899a832f: Status 404 returned error can't find the container with id 5411a42de63204581cd09f5268bbda31765aeba3655837714630d122899a832f Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 
23:08:38.483114 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.490504 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.493931 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: W0126 23:08:38.496583 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7acc40a_3d17_4c4f_8300_2fa8c89564a9.slice/crio-ac1fd80269501dfce5a077c39101995937ef8765c5f3e38b83deb0442d5dc4a2 WatchSource:0}: Error finding container ac1fd80269501dfce5a077c39101995937ef8765c5f3e38b83deb0442d5dc4a2: Status 404 returned error can't find the container with id ac1fd80269501dfce5a077c39101995937ef8765c5f3e38b83deb0442d5dc4a2 Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.516280 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.516301 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.516407 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.516467 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.516525 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.516852 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.521232 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.521370 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.521891 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.523420 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.524142 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.525240 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.525883 
4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.526651 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.528492 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.529440 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.531759 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.532390 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.533830 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.534440 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.536671 
4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.537427 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.538111 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.539302 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.539793 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.541518 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.542500 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.543142 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.544791 
4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.545662 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.547408 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.547975 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.553787 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.554777 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.554899 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.555424 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.556703 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.557547 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.558755 4995 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" 
podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.558874 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.561159 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.562058 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.563414 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.565239 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.566231 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.567472 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.568266 4995 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.570089 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.572314 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.573034 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.574136 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.574738 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.575882 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.576339 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.577279 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.577766 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.578830 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.579332 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.580169 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.582456 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.583297 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.583919 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.584592 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.585036 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l9xmp"] Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.589633 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.591699 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.591915 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.591955 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.592223 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.592446 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.594586 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.594739 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 23:08:38 crc kubenswrapper[4995]: 
I0126 23:08:38.609557 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866b
e30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.632395 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c0
4bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.645607 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.662051 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.665241 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hln88" event={"ID":"4ba70657-ea12-4a85-9ec3-c1423b5b6912","Type":"ContainerStarted","Data":"d72fe382310a4aad8215c99e864bc042e6eccd79c55b7cfb2bf698a1d63951d8"} Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.667590 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-m8zlz" event={"ID":"15f852ca-fb3b-4ad2-836a-d0dbe735dde4","Type":"ContainerStarted","Data":"5411a42de63204581cd09f5268bbda31765aeba3655837714630d122899a832f"} Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.668637 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" 
event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerStarted","Data":"ac1fd80269501dfce5a077c39101995937ef8765c5f3e38b83deb0442d5dc4a2"} Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.671370 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"d4d65edfef32fd1663a349c7d8d4c958f5f32a84fb38e5a093ecf4fa0d17a6b2"} Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.678863 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.693638 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.706748 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.728191 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T
23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735063 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-ovn\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735121 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovn-node-metrics-cert\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735192 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-script-lib\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735226 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-kubelet\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735243 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-env-overrides\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735262 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-etc-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735277 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ngr8z\" (UniqueName: \"kubernetes.io/projected/be4486f1-6ac2-4655-aff8-634049c9aa6c-kube-api-access-ngr8z\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735300 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-var-lib-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735328 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-ovn-kubernetes\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735342 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-netd\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735356 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-bin\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735371 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735398 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-log-socket\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735415 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735429 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-config\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735443 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-slash\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: 
I0126 23:08:38.735456 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-systemd\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735470 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-netns\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735590 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-systemd-units\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735614 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-node-log\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.748661 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.763155 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.773884 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.788608 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.807581 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.827072 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836401 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-netns\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836477 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-systemd-units\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836504 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-node-log\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836530 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-systemd-units\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836531 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-ovn\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836608 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovn-node-metrics-cert\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836632 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-script-lib\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836650 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-kubelet\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836629 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-node-log\") pod 
\"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836571 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-ovn\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836664 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-env-overrides\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836496 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-netns\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836727 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-etc-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836751 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-etc-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836789 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngr8z\" (UniqueName: \"kubernetes.io/projected/be4486f1-6ac2-4655-aff8-634049c9aa6c-kube-api-access-ngr8z\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836866 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-var-lib-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836898 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-var-lib-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836909 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-ovn-kubernetes\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836823 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-kubelet\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836940 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-netd\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836967 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-netd\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837003 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-bin\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837031 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837049 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-log-socket\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 
23:08:38.837073 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837090 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-bin\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837129 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-log-socket\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837131 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837110 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-config\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837194 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-slash\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837197 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-ovn-kubernetes\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837208 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-systemd\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837239 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-slash\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837153 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837225 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-systemd\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837273 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-env-overrides\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837443 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-script-lib\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837797 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-config\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.844996 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.847083 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovn-node-metrics-cert\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.857352 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngr8z\" (UniqueName: \"kubernetes.io/projected/be4486f1-6ac2-4655-aff8-634049c9aa6c-kube-api-access-ngr8z\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.857493 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.870620 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.879853 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.891658 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.940957 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: W0126 23:08:38.953148 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe4486f1_6ac2_4655_aff8_634049c9aa6c.slice/crio-0108074f5a92b88611ab160f29c724e30a5806d5f87702c7dcc0e14bc5062f52 WatchSource:0}: Error finding container 0108074f5a92b88611ab160f29c724e30a5806d5f87702c7dcc0e14bc5062f52: Status 404 returned error can't find the container with id 0108074f5a92b88611ab160f29c724e30a5806d5f87702c7dcc0e14bc5062f52 Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.469960 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:15:53.667071392 +0000 UTC Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.677694 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.677741 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.680519 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hln88" 
event={"ID":"4ba70657-ea12-4a85-9ec3-c1423b5b6912","Type":"ContainerStarted","Data":"cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.681894 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-m8zlz" event={"ID":"15f852ca-fb3b-4ad2-836a-d0dbe735dde4","Type":"ContainerStarted","Data":"006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.683166 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab" exitCode=0 Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.683246 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.683292 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"0108074f5a92b88611ab160f29c724e30a5806d5f87702c7dcc0e14bc5062f52"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.684993 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7acc40a-3d17-4c4f-8300-2fa8c89564a9" containerID="4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03" exitCode=0 Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.685143 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerDied","Data":"4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.686327 
4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.693853 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metric
s-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.708985 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.733984 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.751897 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.767844 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.782037 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.793297 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.804645 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.816270 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.825591 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.836557 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.855754 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.867284 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.878949 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.894827 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.913681 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.928451 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.939709 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.953089 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.966317 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.980650 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.993389 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.003988 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.027603 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.090673 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.103419 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.115190 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.118748 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.125336 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.128952 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.145861 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.152999 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.153155 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:08:44.153141517 +0000 UTC m=+28.317848982 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.160671 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"
name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.173522 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.188674 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.206933 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.221291 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.235284 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.250029 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.254217 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.254253 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.254287 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.254314 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254416 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254431 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254441 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254477 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:44.254464461 +0000 UTC m=+28.419171926 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254731 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254760 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:44.254751808 +0000 UTC m=+28.419459273 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254812 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254824 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254833 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254858 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:44.25484996 +0000 UTC m=+28.419557425 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254910 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254935 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:44.254928062 +0000 UTC m=+28.419635527 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.273269 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.286249 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.298430 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.311376 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.327188 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.344153 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.358088 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.378822 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.391267 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.423161 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.462000 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.471285 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:30:16.489038997 +0000 UTC Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.501664 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.516316 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.516354 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.516458 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.516508 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.516639 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.516701 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.542682 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.582461 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.620303 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.660196 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.691777 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7acc40a-3d17-4c4f-8300-2fa8c89564a9" containerID="f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9" exitCode=0 Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.691857 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerDied","Data":"f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9"} Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.697669 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.697720 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} Jan 26 23:08:40 crc 
kubenswrapper[4995]: I0126 23:08:40.697733 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.697745 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.697755 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.697766 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.706777 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.752230 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.782654 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.787094 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xltwc"] Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.787525 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.814025 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.833829 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.853453 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.858866 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d39f52ec-0319-4f38-b9f5-7f472d8006c5-serviceca\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.858919 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwzch\" (UniqueName: \"kubernetes.io/projected/d39f52ec-0319-4f38-b9f5-7f472d8006c5-kube-api-access-vwzch\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.858952 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d39f52ec-0319-4f38-b9f5-7f472d8006c5-host\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.874306 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 23:08:40 crc 
kubenswrapper[4995]: I0126 23:08:40.899730 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.948051 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.959917 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d39f52ec-0319-4f38-b9f5-7f472d8006c5-serviceca\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.959983 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwzch\" (UniqueName: \"kubernetes.io/projected/d39f52ec-0319-4f38-b9f5-7f472d8006c5-kube-api-access-vwzch\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.960007 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d39f52ec-0319-4f38-b9f5-7f472d8006c5-host\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.960062 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/d39f52ec-0319-4f38-b9f5-7f472d8006c5-host\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.960910 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d39f52ec-0319-4f38-b9f5-7f472d8006c5-serviceca\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.981538 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.013125 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwzch\" (UniqueName: \"kubernetes.io/projected/d39f52ec-0319-4f38-b9f5-7f472d8006c5-kube-api-access-vwzch\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.043199 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.086185 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.100450 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.124088 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.168272 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T
23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.203610 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.243157 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.284276 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.322505 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.363033 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.401768 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.442724 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.473752 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:03:50.643100995 +0000 UTC Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.480895 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.520660 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.572736 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.629086 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.647552 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.680907 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.702593 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7acc40a-3d17-4c4f-8300-2fa8c89564a9" containerID="22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f" exitCode=0 Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.702671 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerDied","Data":"22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f"} Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.704241 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xltwc" event={"ID":"d39f52ec-0319-4f38-b9f5-7f472d8006c5","Type":"ContainerStarted","Data":"54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae"} Jan 26 23:08:41 crc 
kubenswrapper[4995]: I0126 23:08:41.704290 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xltwc" event={"ID":"d39f52ec-0319-4f38-b9f5-7f472d8006c5","Type":"ContainerStarted","Data":"36b11a7b7bc03340e54279e0f1324786df133dfe94d417d87f36296366d15d3b"} Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.724634 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.763477 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.804621 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.841997 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.881521 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.927655 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.960075 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.003636 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.044322 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.078596 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.127608 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23
:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.161021 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.210371 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.252401 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.291040 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.325003 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 
23:08:42.363301 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 
23:08:42.381388 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.383015 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.383046 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.383058 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.383168 4995 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.403257 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.455336 4995 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.455627 4995 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.456888 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.456915 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.456927 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.456943 4995 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.456955 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.474278 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:47:15.711049155 +0000 UTC Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.474764 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.478844 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.478889 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.478907 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.478930 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.478947 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.482373 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.492830 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.496514 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.496546 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.496556 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.496572 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.496582 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.512509 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516383 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516428 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516444 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516469 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516483 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516551 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516624 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516643 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.516730 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.516851 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.517022 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.522766 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.533775 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.537199 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.537246 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.537258 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.537275 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.537285 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.549481 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.549601 4995 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.551144 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.551178 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.551192 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.551208 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.551219 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.567396 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.601467 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.642009 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.653272 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.653302 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.653310 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.653323 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.653332 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.712185 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7acc40a-3d17-4c4f-8300-2fa8c89564a9" containerID="1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89" exitCode=0 Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.712225 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerDied","Data":"1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.729760 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.747938 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.755086 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.755174 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.755214 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.755231 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.755242 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.762429 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.801498 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.841516 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.857131 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.857168 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.857177 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.857191 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.857200 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.880584 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.920470 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.959403 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.959444 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.959456 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.959473 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.959486 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.961768 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.007774 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.043501 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.062268 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.062319 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.062332 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.062350 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.062364 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.082223 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.125693 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.165244 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.165319 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.165336 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.165364 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.165383 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.169729 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.208878 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.243046 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.267897 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.267927 
4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.267937 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.267949 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.267958 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.371486 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.371531 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.371544 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.371563 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.371578 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.425232 4995 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.474165 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.474194 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.474202 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.474220 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.474231 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.474428 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 18:05:32.449251203 +0000 UTC Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.576615 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.576639 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.576646 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.576658 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.576666 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.678919 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.678959 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.678974 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.678993 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.679007 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.725984 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.728550 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7acc40a-3d17-4c4f-8300-2fa8c89564a9" containerID="49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a" exitCode=0 Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.728584 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerDied","Data":"49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.745419 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.761206 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.775175 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.775773 4995 scope.go:117] "RemoveContainer" containerID="dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7" Jan 26 23:08:43 crc kubenswrapper[4995]: E0126 23:08:43.775911 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.784700 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.784734 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 
23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.784744 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.784757 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.784766 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.785992 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.799001 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa
80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.831602 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.842733 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.853689 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.863926 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.871874 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.888658 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.888696 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.888706 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.888721 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.888731 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.891540 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.903633 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.922040 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.933802 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.947995 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.959818 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.990421 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.990460 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.990477 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.990494 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.990506 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.093084 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.093131 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.093139 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.093154 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.093166 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.191793 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.192007 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 23:08:52.191974877 +0000 UTC m=+36.356682352 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.196063 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.196149 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.196206 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.196241 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.196295 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.292767 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.292999 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.293133 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.293257 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.292939 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293315 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293330 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293378 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:52.293361532 +0000 UTC m=+36.458069007 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293064 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293198 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293710 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293760 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293611 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:52.293565217 +0000 UTC m=+36.458272712 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293838 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293876 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:52.293841873 +0000 UTC m=+36.458549378 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293924 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:52.293906225 +0000 UTC m=+36.458613820 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.299385 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.299420 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.299431 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.299456 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.299468 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.402857 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.402908 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.403200 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.403250 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.403265 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.474733 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:36:40.328450127 +0000 UTC Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.505968 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.506008 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.506024 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.506044 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.506059 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.519339 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.519651 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.520087 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.520328 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.520200 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.520425 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.608576 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.608640 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.608658 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.608677 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.608689 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.711375 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.711411 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.711421 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.711438 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.711450 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.735012 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7acc40a-3d17-4c4f-8300-2fa8c89564a9" containerID="f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26" exitCode=0 Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.735055 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerDied","Data":"f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.748510 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.762031 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.794060 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.813031 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.813743 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.813772 
4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.813781 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.813794 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.813804 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.832457 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.847266 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.864746 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.878415 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.893581 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.906295 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.916773 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.916806 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.916819 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.916837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.916848 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.917041 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.927772 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.946287 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.957995 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.967059 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.018976 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.019013 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.019026 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.019040 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 
23:08:45.019050 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.121695 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.121742 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.121755 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.121773 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.121786 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.223654 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.223691 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.223701 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.223716 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.223727 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.326575 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.326609 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.326620 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.326635 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.326647 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.429271 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.429319 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.429329 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.429342 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.429351 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.475559 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:21:20.181738107 +0000 UTC Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.532292 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.532350 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.532367 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.532389 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.532404 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.635299 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.635366 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.635384 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.635408 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.635426 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.737898 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.737955 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.737973 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.737997 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.738014 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.744305 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.745749 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.745782 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.748722 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerStarted","Data":"4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.763903 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.787810 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.788352 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.788654 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.802225 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.814128 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.828731 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.839980 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.840157 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.840265 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.840353 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.840486 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.840649 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.856266 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.869609 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.903129 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.920778 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.943214 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.943266 
4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.943280 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.943301 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.943318 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.955985 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.984991 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.002865 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.019592 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.032290 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.045506 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc 
kubenswrapper[4995]: I0126 23:08:46.045738 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.045801 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.045865 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.045921 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.053115 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.067137 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.077942 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.092230 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.105357 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.116902 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.134979 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.148010 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.148047 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.148058 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.148076 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.148088 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.148847 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.173983 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c
8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.187166 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.203538 4995 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.205067 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.218353 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.235470 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.248250 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.250769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.250807 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.250818 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.250834 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.250845 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.267778 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z 
is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.353045 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.353087 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.353120 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.353137 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.353146 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.455627 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.455668 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.455678 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.455693 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.455704 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.476063 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:30:31.446655507 +0000 UTC Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.516409 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.516499 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.516525 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:46 crc kubenswrapper[4995]: E0126 23:08:46.516672 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:46 crc kubenswrapper[4995]: E0126 23:08:46.516756 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:46 crc kubenswrapper[4995]: E0126 23:08:46.516872 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.533801 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.551495 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.558310 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.558379 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.558403 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.558432 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.558452 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.567143 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z 
is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.604151 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.616941 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.632445 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.648384 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.659421 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.660861 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.660898 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.660908 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.660925 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.660937 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.676717 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.690064 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.699651 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.717964 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23
:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.730595 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.745153 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.751367 4995 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.755444 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.765437 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.765623 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.765645 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.765659 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.765669 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.868919 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.868951 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.868960 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.868972 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.868982 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.972768 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.972844 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.972861 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.972885 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.972902 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.076168 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.076235 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.076257 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.076284 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.076306 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.178942 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.178979 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.178990 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.179004 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.179014 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.281875 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.281936 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.281957 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.281980 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.281997 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.384616 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.384671 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.384693 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.384723 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.384744 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.476742 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:41:07.161277231 +0000 UTC Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.487309 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.487347 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.487357 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.487369 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.487379 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.589387 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.589450 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.589464 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.589487 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.589500 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.691947 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.692318 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.692327 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.692343 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.692357 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.753742 4995 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.795200 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.795245 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.795256 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.795271 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.795281 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.897905 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.898088 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.898186 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.898252 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.898309 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.000861 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.000907 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.000919 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.000940 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.000952 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.103610 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.103697 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.103717 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.103741 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.103788 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.206932 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.206978 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.206989 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.207005 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.207017 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.309288 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.309337 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.309346 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.309359 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.309368 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.413964 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.414031 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.414051 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.414080 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.414131 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.477498 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:55:37.611092392 +0000 UTC Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.516317 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.516352 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:48 crc kubenswrapper[4995]: E0126 23:08:48.516471 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.516321 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:48 crc kubenswrapper[4995]: E0126 23:08:48.516684 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:48 crc kubenswrapper[4995]: E0126 23:08:48.516874 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.517440 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.517519 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.517542 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.517565 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.517585 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.620606 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.620698 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.620720 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.620749 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.620778 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.723689 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.723758 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.723779 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.723808 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.723829 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.759205 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/0.log" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.762728 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46" exitCode=1 Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.762787 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.763599 4995 scope.go:117] "RemoveContainer" containerID="782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.803216 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.818448 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.830357 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.830414 
4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.830432 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.830458 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.830478 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.833738 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.845889 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.860657 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.877975 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.900761 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.922763 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:48Z\\\",\\\"message\\\":\\\"ointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032811 6272 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032862 
6272 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 23:08:48.032932 6272 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 23:08:48.033463 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 23:08:48.033506 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 23:08:48.033514 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 23:08:48.033533 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 23:08:48.033532 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 23:08:48.033534 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 23:08:48.033558 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 23:08:48.033566 6272 factory.go:656] Stopping watch factory\\\\nI0126 23:08:48.033590 6272 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.933551 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.933593 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.933603 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.933619 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.933631 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.937887 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.955046 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.965825 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.977782 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.988313 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.002281 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.011017 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.036005 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.036034 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.036043 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.036055 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 
23:08:49.036064 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.087308 4995 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.137928 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.138204 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.138270 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.138334 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.138402 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.240913 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.240962 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.240972 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.240987 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.240998 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.343056 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.343117 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.343133 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.343154 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.343166 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.445493 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.445760 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.445821 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.445894 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.445952 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.478563 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 20:13:26.949519758 +0000 UTC Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.547682 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.548006 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.548022 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.548038 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.548050 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.650535 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.650577 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.650589 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.650606 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.650619 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.695935 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7"] Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.696438 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.698458 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.698563 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.710449 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819
eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba
39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a7
16019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dc
dcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.722413 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.732202 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.741562 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.753236 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.753277 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc 
kubenswrapper[4995]: I0126 23:08:49.753289 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.753306 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.753319 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.754774 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.765811 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.767926 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/1.log" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.773845 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/0.log" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.777838 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d" exitCode=1 Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.777880 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.777916 4995 scope.go:117] "RemoveContainer" containerID="782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.780476 4995 scope.go:117] "RemoveContainer" containerID="ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d" Jan 26 23:08:49 crc kubenswrapper[4995]: E0126 23:08:49.780657 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 
23:08:49.786254 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.799042 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.818523 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:48Z\\\",\\\"message\\\":\\\"ointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032811 6272 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032862 
6272 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 23:08:48.032932 6272 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 23:08:48.033463 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 23:08:48.033506 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 23:08:48.033514 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 23:08:48.033533 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 23:08:48.033532 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 23:08:48.033534 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 23:08:48.033558 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 23:08:48.033566 6272 factory.go:656] Stopping watch factory\\\\nI0126 23:08:48.033590 6272 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.832891 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.844933 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.854999 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.855336 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.855518 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.855716 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wp4c\" (UniqueName: \"kubernetes.io/projected/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-kube-api-access-5wp4c\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: 
\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.856074 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.856146 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.856160 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.856177 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.856189 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.858638 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.872388 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fde
e88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.886467 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.901427 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.923908 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.944380 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\
\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.957444 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.957518 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.957585 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.957610 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wp4c\" (UniqueName: \"kubernetes.io/projected/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-kube-api-access-5wp4c\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.958706 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.958734 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.958746 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.958763 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.958775 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.959333 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.959947 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.960449 
4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.964191 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.970533 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.973758 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wp4c\" (UniqueName: \"kubernetes.io/projected/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-kube-api-access-5wp4c\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.980572 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.998231 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.009465 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.013208 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":
\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabou
ts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: W0126 23:08:50.019906 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ef1196f_dfec_4c45_9abc_0cd1df4bc941.slice/crio-0e3221e1deef768a8588852e9b0183ef6509c5b31a01ce8661f7860e3ed67433 WatchSource:0}: Error finding container 0e3221e1deef768a8588852e9b0183ef6509c5b31a01ce8661f7860e3ed67433: Status 404 returned error can't find the container with id 0e3221e1deef768a8588852e9b0183ef6509c5b31a01ce8661f7860e3ed67433 Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.026121 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.036168 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa
80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.051451 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:48Z\\\",\\\"message\\\":\\\"ointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032811 6272 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032862 6272 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 23:08:48.032932 6272 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 23:08:48.033463 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 23:08:48.033506 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 23:08:48.033514 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 23:08:48.033533 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 23:08:48.033532 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 23:08:48.033534 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 23:08:48.033558 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 23:08:48.033566 6272 factory.go:656] Stopping watch factory\\\\nI0126 23:08:48.033590 6272 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\
\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 
23:08:50.060508 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.060556 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.060567 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.060585 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.060595 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.063431 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.076427 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.087231 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.096962 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.110303 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.119064 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.128021 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.163398 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.163444 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.163455 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.163470 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.163481 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.266308 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.266366 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.266385 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.266409 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.266425 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.369215 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.369291 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.369308 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.369333 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.369355 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.472578 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.472637 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.472655 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.472679 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.472698 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.479171 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:48:52.144118087 +0000 UTC Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.517064 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.517174 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.517180 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:50 crc kubenswrapper[4995]: E0126 23:08:50.517312 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:50 crc kubenswrapper[4995]: E0126 23:08:50.517487 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:50 crc kubenswrapper[4995]: E0126 23:08:50.517777 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.576780 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.576886 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.576964 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.577040 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.577071 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.679494 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.679549 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.679562 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.679588 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.679602 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.782605 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.782646 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.782661 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.782681 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.782696 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.786379 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" event={"ID":"1ef1196f-dfec-4c45-9abc-0cd1df4bc941","Type":"ContainerStarted","Data":"ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.786421 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" event={"ID":"1ef1196f-dfec-4c45-9abc-0cd1df4bc941","Type":"ContainerStarted","Data":"70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.786437 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" event={"ID":"1ef1196f-dfec-4c45-9abc-0cd1df4bc941","Type":"ContainerStarted","Data":"0e3221e1deef768a8588852e9b0183ef6509c5b31a01ce8661f7860e3ed67433"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.789580 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/1.log" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.806405 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.818896 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.834415 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.860537 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.876294 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.885588 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.885926 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.886135 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.886350 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.886530 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.893713 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.905497 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.918882 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.933812 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.947347 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.960374 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.982410 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.989524 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.989562 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.989570 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.989584 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.989594 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.995415 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.004153 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.013866 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.029091 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:48Z\\\",\\\"message\\\":\\\"ointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032811 6272 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032862 6272 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 23:08:48.032932 6272 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 23:08:48.033463 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 23:08:48.033506 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 23:08:48.033514 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 23:08:48.033533 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 23:08:48.033532 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 23:08:48.033534 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 23:08:48.033558 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 23:08:48.033566 6272 factory.go:656] Stopping watch factory\\\\nI0126 23:08:48.033590 6272 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\
\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 
23:08:51.091717 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.091772 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.091783 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.091796 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.091807 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.193957 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.194003 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.194014 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.194030 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.194042 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.296081 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.296174 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.296192 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.296217 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.296242 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.399413 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.399473 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.399490 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.399517 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.399533 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.479731 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:44:57.429022815 +0000 UTC Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.502342 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.502417 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.502446 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.502478 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.502499 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.530243 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-vlmfg"] Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.530930 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:51 crc kubenswrapper[4995]: E0126 23:08:51.531042 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.567145 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T
23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.581486 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xtg8\" (UniqueName: \"kubernetes.io/projected/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-kube-api-access-5xtg8\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.581536 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.588331 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operat
or@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.605259 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.605320 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.605337 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.605363 4995 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.605382 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.608785 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.628323 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.644969 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc 
kubenswrapper[4995]: I0126 23:08:51.664603 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.682211 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xtg8\" (UniqueName: \"kubernetes.io/projected/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-kube-api-access-5xtg8\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.682327 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:51 crc kubenswrapper[4995]: E0126 23:08:51.682494 4995 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:51 crc kubenswrapper[4995]: E0126 23:08:51.682611 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs podName:4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:52.182579991 +0000 UTC m=+36.347287486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs") pod "network-metrics-daemon-vlmfg" (UID: "4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.688465 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b7
4ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.705722 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xtg8\" (UniqueName: \"kubernetes.io/projected/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-kube-api-access-5xtg8\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.708842 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.708888 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.708907 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.708935 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.708954 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.714766 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.731404 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.763121 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.800898 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.811925 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.811961 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.811974 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.811991 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.812003 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.817268 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.828832 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.846442 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:48Z\\\",\\\"message\\\":\\\"ointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032811 6272 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032862 6272 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 23:08:48.032932 6272 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 23:08:48.033463 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 23:08:48.033506 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 23:08:48.033514 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 23:08:48.033533 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 23:08:48.033532 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 23:08:48.033534 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 23:08:48.033558 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 23:08:48.033566 6272 factory.go:656] Stopping watch factory\\\\nI0126 23:08:48.033590 6272 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\
\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 
23:08:51.858862 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.867227 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.877383 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.915000 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.915054 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.915067 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.915085 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.915125 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.018902 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.018967 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.018985 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.019009 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.019027 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.122567 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.122705 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.122730 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.122763 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.122786 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.186900 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.187067 4995 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.187215 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs podName:4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:53.18718079 +0000 UTC m=+37.351888315 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs") pod "network-metrics-daemon-vlmfg" (UID: "4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.225911 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.226001 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.226035 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.226066 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.226088 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.287986 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.288355 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:08.288305679 +0000 UTC m=+52.453013184 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.329471 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.329532 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.329554 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.329586 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: 
I0126 23:08:52.329608 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.389072 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.389217 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.389281 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389309 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389353 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.389364 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389377 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389453 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389489 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389525 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:08.389496189 +0000 UTC m=+52.554203694 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389569 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:08.38955023 +0000 UTC m=+52.554257725 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389597 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:08.389581521 +0000 UTC m=+52.554289026 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389489 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389657 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389680 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389751 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:08.389731444 +0000 UTC m=+52.554438949 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.432894 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.432945 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.432954 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.432970 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.432982 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.480852 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:11:21.999477335 +0000 UTC Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.516776 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.516875 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.516928 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.516776 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.517020 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.517164 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.535718 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.535791 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.535816 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.535844 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.535869 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.637529 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.637560 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.637567 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.637581 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.637593 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.739660 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.739697 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.739706 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.739723 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.739733 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.841287 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.841325 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.841333 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.841345 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.841355 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.922365 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.922436 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.922456 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.922483 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.922501 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.945883 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.950398 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.950506 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.950528 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.950553 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.950570 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.967152 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.971350 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.971397 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.971409 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.971426 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.971439 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.983474 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.986561 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.986590 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.986599 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.986613 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.986622 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:53 crc kubenswrapper[4995]: E0126 23:08:52.999851 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.003833 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.003873 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.003887 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.003907 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.003922 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:53 crc kubenswrapper[4995]: E0126 23:08:53.024308 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:53Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:53 crc kubenswrapper[4995]: E0126 23:08:53.024458 4995 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.026260 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.026318 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.026336 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.026357 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.026371 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.129039 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.129143 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.129171 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.129201 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.129223 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.196063 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:53 crc kubenswrapper[4995]: E0126 23:08:53.196377 4995 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:53 crc kubenswrapper[4995]: E0126 23:08:53.196472 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs podName:4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:55.19644854 +0000 UTC m=+39.361156045 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs") pod "network-metrics-daemon-vlmfg" (UID: "4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.232235 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.232316 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.232341 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.232375 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.232399 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.335230 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.335269 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.335280 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.335295 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.335306 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.439197 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.439256 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.439270 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.439291 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.439304 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.481162 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:52:20.727049603 +0000 UTC
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.516853 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg"
Jan 26 23:08:53 crc kubenswrapper[4995]: E0126 23:08:53.516988 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.542081 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.542174 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.542194 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.542218 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.542236 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.645604 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.645915 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.646153 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.646323 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.646445 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.749593 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.749651 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.749671 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.749740 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.749763 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.853492 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.853561 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.853584 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.853614 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.853636 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.957073 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.957167 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.957191 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.957222 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.957242 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.060645 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.060696 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.060708 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.060724 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.060737 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.163172 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.163210 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.163222 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.163237 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.163248 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.265857 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.265920 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.265942 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.265962 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.265977 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.368769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.368833 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.368847 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.368894 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.368912 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.472788 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.472850 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.472871 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.472897 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.472914 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.482247 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 22:50:03.504348274 +0000 UTC
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.516218 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.516258 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.516368 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 23:08:54 crc kubenswrapper[4995]: E0126 23:08:54.516523 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 23:08:54 crc kubenswrapper[4995]: E0126 23:08:54.516650 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 23:08:54 crc kubenswrapper[4995]: E0126 23:08:54.516843 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.575281 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.575348 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.575371 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.575399 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.575421 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.684550 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.684614 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.684632 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.684655 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.684671 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.787369 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.787425 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.787459 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.787484 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.787506 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.890388 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.890757 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.890940 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.891164 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.891389 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.994877 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.995163 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.995266 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.995360 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.995455 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.098975 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.099050 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.099088 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.099183 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.099213 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.201886 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.201960 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.201982 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.202010 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.202031 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.216910 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg"
Jan 26 23:08:55 crc kubenswrapper[4995]: E0126 23:08:55.217205 4995 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 23:08:55 crc kubenswrapper[4995]: E0126 23:08:55.217313 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs podName:4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:59.217282652 +0000 UTC m=+43.381990167 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs") pod "network-metrics-daemon-vlmfg" (UID: "4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.305021 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.305054 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.305063 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.305076 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.305088 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.408455 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.408496 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.408508 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.408525 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.408536 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.482388 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 06:00:14.91447647 +0000 UTC
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.512471 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.512538 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.512573 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.512604 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.512625 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.516760 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg"
Jan 26 23:08:55 crc kubenswrapper[4995]: E0126 23:08:55.516964 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.615879 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.615944 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.615962 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.615990 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.616012 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.720167 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.720236 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.720248 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.720291 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.720305 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.822717 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.822828 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.822856 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.822886 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.822909 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.926063 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.926169 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.926197 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.926227 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.926248 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.029814 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.029885 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.029903 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.029932 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.029950 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.133769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.134149 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.134173 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.134206 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.134230 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.236894 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.237208 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.237338 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.237460 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.237624 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.340784 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.340855 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.340873 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.340898 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.340914 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.449024 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.449477 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.449618 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.449800 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.449947 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.482757 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:43:11.767851671 +0000 UTC Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.517398 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.517398 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.517642 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:56 crc kubenswrapper[4995]: E0126 23:08:56.518129 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:56 crc kubenswrapper[4995]: E0126 23:08:56.518264 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:56 crc kubenswrapper[4995]: E0126 23:08:56.518387 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.518595 4995 scope.go:117] "RemoveContainer" containerID="dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.537435 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.556600 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.556654 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.556671 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.556699 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.556719 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.556834 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.572492 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.591406 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:48Z\\\",\\\"message\\\":\\\"ointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032811 6272 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032862 6272 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 23:08:48.032932 6272 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 23:08:48.033463 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 23:08:48.033506 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 23:08:48.033514 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 23:08:48.033533 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 23:08:48.033532 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 23:08:48.033534 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 23:08:48.033558 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 23:08:48.033566 6272 factory.go:656] Stopping watch factory\\\\nI0126 23:08:48.033590 6272 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\
\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 
23:08:56.604336 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.616754 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.628003 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.638853 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.652663 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.659610 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.659647 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.659661 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.659677 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.659689 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.667763 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.678795 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.690264 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4
bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.714270 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.728304 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.747562 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.761467 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.761525 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.761537 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.761581 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.761594 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.763742 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.773660 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.816722 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.818612 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.818992 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.834272 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.850076 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.861190 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.863875 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.864005 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.864070 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.864202 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.864283 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.870485 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.885313 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.901497 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:48Z\\\",\\\"message\\\":\\\"ointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032811 6272 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032862 6272 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 23:08:48.032932 6272 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 23:08:48.033463 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 23:08:48.033506 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 23:08:48.033514 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 23:08:48.033533 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 23:08:48.033532 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 23:08:48.033534 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 23:08:48.033558 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 23:08:48.033566 6272 factory.go:656] Stopping watch factory\\\\nI0126 23:08:48.033590 6272 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\
\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 
23:08:56.911870 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.924709 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.938425 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.951484 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.965378 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.969780 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.969819 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.969829 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.969843 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.969854 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.979466 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.989005 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.006634 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05
cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.019053 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.030385 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.041361 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.072191 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.072454 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.072606 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.072700 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.072804 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.158015 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.158860 4995 scope.go:117] "RemoveContainer" containerID="ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d" Jan 26 23:08:57 crc kubenswrapper[4995]: E0126 23:08:57.159049 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.174193 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.176446 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.176504 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.176524 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.176548 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.176564 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.187029 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.201336 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-a
piserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:
9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.214288 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.226793 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.239454 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.251442 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.270881 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05
cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.278948 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.278989 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.279001 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.279016 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.279030 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.287018 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.298472 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.312763 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.336189 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.354437 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.366287 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.380697 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.381631 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.381659 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.381668 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.381682 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.381693 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.395750 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.425454 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.482857 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:28:34.798164817 +0000 UTC Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.484865 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.484913 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.484926 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.484945 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: 
I0126 23:08:57.484958 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.517040 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:57 crc kubenswrapper[4995]: E0126 23:08:57.517192 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.587531 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.587565 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.587576 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.587593 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.587603 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.690200 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.690243 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.690257 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.690274 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.690285 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.792269 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.792325 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.792337 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.792353 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.792363 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.894700 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.894788 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.894801 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.894819 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.894830 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.996991 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.997046 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.997061 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.997081 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.997117 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.101035 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.101091 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.101141 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.101165 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.101179 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.204012 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.204079 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.204120 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.204613 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.204641 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.307378 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.307419 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.307430 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.307447 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.307458 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.409528 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.409821 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.409928 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.410090 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.410262 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.483708 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 20:39:46.160438828 +0000 UTC Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.512705 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.512740 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.512749 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.512764 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.512774 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.516921 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.516935 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.517180 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:58 crc kubenswrapper[4995]: E0126 23:08:58.517306 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:58 crc kubenswrapper[4995]: E0126 23:08:58.517514 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:58 crc kubenswrapper[4995]: E0126 23:08:58.517577 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.615567 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.615599 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.615608 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.615621 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.615631 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.717596 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.717628 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.717640 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.717656 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.717668 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.820331 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.820365 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.820375 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.820389 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.820398 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.923661 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.923712 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.923725 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.923744 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.923758 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.025867 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.025917 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.025928 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.025946 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.025958 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.128887 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.129127 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.129266 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.129387 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.129471 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.231930 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.231979 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.232026 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.232048 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.232058 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.255585 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:59 crc kubenswrapper[4995]: E0126 23:08:59.255754 4995 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:59 crc kubenswrapper[4995]: E0126 23:08:59.255823 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs podName:4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:07.255805854 +0000 UTC m=+51.420513329 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs") pod "network-metrics-daemon-vlmfg" (UID: "4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.334672 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.334720 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.334736 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.334754 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.334766 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.437016 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.437151 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.437172 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.437187 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.437198 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.485343 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 23:02:39.185835455 +0000 UTC Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.516671 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:59 crc kubenswrapper[4995]: E0126 23:08:59.516838 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.539549 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.539580 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.539596 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.539610 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.539620 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.642284 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.642340 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.642351 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.642368 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.642379 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.745576 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.745660 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.745685 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.745716 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.745740 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.848322 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.848377 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.848430 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.848449 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.848460 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.950719 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.950760 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.950769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.950782 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.950791 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.053163 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.053191 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.053199 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.053212 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.053221 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.155740 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.155793 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.155805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.155822 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.155834 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.258038 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.258134 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.258153 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.258486 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.258524 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.359995 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.360034 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.360046 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.360062 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.360073 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.462517 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.462559 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.462569 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.462583 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.462593 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.486197 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 13:53:11.123137008 +0000 UTC Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.516900 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.516968 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.516974 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:00 crc kubenswrapper[4995]: E0126 23:09:00.517075 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:00 crc kubenswrapper[4995]: E0126 23:09:00.517217 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:00 crc kubenswrapper[4995]: E0126 23:09:00.517291 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.565199 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.565264 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.565276 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.565293 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.565305 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.667374 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.667484 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.667493 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.667508 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.667516 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.770407 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.770460 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.770470 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.770483 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.770492 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.872255 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.872290 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.872301 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.872315 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.872326 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.975557 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.975614 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.975632 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.975653 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.975669 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.078648 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.078688 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.078702 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.078723 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.078738 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.180594 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.180631 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.180640 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.180654 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.180663 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.282780 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.282812 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.282836 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.282849 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.282858 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.385864 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.385894 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.385902 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.385918 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.385927 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.486437 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:11:54.278255027 +0000 UTC Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.488529 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.488580 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.488597 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.488620 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.488636 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.516888 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:01 crc kubenswrapper[4995]: E0126 23:09:01.517052 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.591885 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.591962 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.591987 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.592011 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.592028 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.695040 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.695192 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.695222 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.695255 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.695275 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.798750 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.798818 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.798841 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.798873 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.798897 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.901787 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.901853 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.901870 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.901894 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.901912 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.004519 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.004580 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.004591 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.004613 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.004629 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.108170 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.108262 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.108278 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.108304 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.108321 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.211302 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.211369 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.211396 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.211425 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.211447 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.314731 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.314770 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.314780 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.314797 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.314808 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.417947 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.418033 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.418052 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.418077 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.418095 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.487036 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 05:41:01.685706856 +0000 UTC Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.516851 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.516935 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.516856 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:02 crc kubenswrapper[4995]: E0126 23:09:02.517040 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:02 crc kubenswrapper[4995]: E0126 23:09:02.517231 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:02 crc kubenswrapper[4995]: E0126 23:09:02.517426 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.521941 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.522001 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.522018 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.522041 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.522057 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.625468 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.625547 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.625576 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.625605 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.625629 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.727762 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.727817 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.727825 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.727841 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.727851 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.830647 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.830691 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.830702 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.830720 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.830733 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.934496 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.934552 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.934570 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.934599 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.934617 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.037860 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.037929 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.037946 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.037968 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.037986 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.113985 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.128784 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.140714 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.140776 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.140799 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.140827 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.140848 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.157759 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.176414 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.197608 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.212816 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.224164 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.240146 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.243084 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.243166 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.243184 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.243208 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.243225 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.261799 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.281880 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.315050 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.336019 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.345631 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.345671 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.345683 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 
23:09:03.345700 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.345713 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.358132 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.379068 4995 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.397810 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.397879 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.397906 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.397937 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.397960 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.398360 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.411525 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: E0126 23:09:03.415164 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.419473 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.419524 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.419538 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.419555 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.419567 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.427754 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: E0126 23:09:03.437348 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.438555 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.441687 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.441737 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.441754 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.441776 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.441792 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.448874 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: E0126 23:09:03.453963 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.457034 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.457061 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.457070 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.457083 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.457094 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: E0126 23:09:03.466914 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.469635 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.469670 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.469682 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.469697 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.469710 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: E0126 23:09:03.480947 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: E0126 23:09:03.481127 4995 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.482504 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.482538 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.482550 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.482589 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.482606 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.487828 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:49:58.13719285 +0000 UTC Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.516318 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:03 crc kubenswrapper[4995]: E0126 23:09:03.516437 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.584233 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.584272 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.584284 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.584301 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.584313 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.688076 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.688156 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.688172 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.688192 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.688209 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.791291 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.791367 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.791392 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.791420 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.791440 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.894458 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.894499 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.894512 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.894527 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.894539 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.997280 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.997346 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.997358 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.997375 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.997386 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.099891 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.099938 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.099948 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.099963 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.099973 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.202725 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.202804 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.202817 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.202837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.202850 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.306683 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.306760 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.306779 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.306806 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.306825 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.410581 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.410659 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.410682 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.410715 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.410732 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.488333 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:52:25.540019378 +0000 UTC Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.514195 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.514258 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.514274 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.514298 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.514316 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.516602 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.516617 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:04 crc kubenswrapper[4995]: E0126 23:09:04.516808 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:04 crc kubenswrapper[4995]: E0126 23:09:04.516848 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.517133 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:04 crc kubenswrapper[4995]: E0126 23:09:04.517423 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.617287 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.617351 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.617365 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.617392 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.617412 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.720516 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.720642 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.720681 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.720704 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.720719 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.823753 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.823998 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.824090 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.824270 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.824403 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.927552 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.927780 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.927807 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.927837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.927857 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.030703 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.030778 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.030802 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.030833 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.030856 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.134010 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.134047 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.134057 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.134071 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.134080 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.237011 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.237043 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.237052 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.237066 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.237075 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.340805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.340874 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.340890 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.340911 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.340926 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.443754 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.443805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.443823 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.443848 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.443865 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.488736 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 04:53:22.10321458 +0000 UTC Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.516340 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:05 crc kubenswrapper[4995]: E0126 23:09:05.516629 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.546984 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.547024 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.547038 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.547060 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.547075 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.650084 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.650168 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.650186 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.650209 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.650227 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.753489 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.753551 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.753564 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.753586 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.753598 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.857378 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.857451 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.857473 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.857509 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.857532 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.961032 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.961084 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.961134 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.961157 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.961175 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.064681 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.065317 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.065400 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.065486 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.065568 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.169031 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.169079 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.169126 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.169162 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.169185 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.272742 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.272805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.272822 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.272847 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.272866 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.376346 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.376419 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.376441 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.376463 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.376480 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.480279 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.480348 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.480379 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.480409 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.480432 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.489763 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:52:40.596810736 +0000 UTC Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.516638 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.516675 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.516782 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:06 crc kubenswrapper[4995]: E0126 23:09:06.517297 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:06 crc kubenswrapper[4995]: E0126 23:09:06.517061 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:06 crc kubenswrapper[4995]: E0126 23:09:06.517460 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.537658 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metri
cs-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.556684 4995 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\"
,\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597
126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.572203 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.582506 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.582537 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.582551 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.582567 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.582579 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.593613 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.615775 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.631488 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.661193 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05
cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.679354 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.686401 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.686490 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.686517 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.686551 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.686576 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.695653 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.714165 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.737577 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.758174 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.775827 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.790176 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.790271 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.790299 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.790335 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.790357 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.793671 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.807663 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.825703 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.856455 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.873211 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.893773 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.893823 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.893834 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.893850 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.893862 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.997048 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.997142 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.997159 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.997186 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.997203 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.100752 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.100826 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.100852 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.100880 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.100903 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.203639 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.203770 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.203804 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.203835 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.203859 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.307164 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.307216 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.307232 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.307255 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.307271 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.336682 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:07 crc kubenswrapper[4995]: E0126 23:09:07.336819 4995 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:09:07 crc kubenswrapper[4995]: E0126 23:09:07.336882 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs podName:4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:23.336864524 +0000 UTC m=+67.501571999 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs") pod "network-metrics-daemon-vlmfg" (UID: "4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.410269 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.410352 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.410377 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.410410 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.410433 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.490913 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:36:35.534286534 +0000 UTC Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.514518 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.514609 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.514627 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.514653 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.514673 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.516986 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:07 crc kubenswrapper[4995]: E0126 23:09:07.517220 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.618353 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.618400 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.618419 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.618450 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.618474 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.721287 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.721342 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.721361 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.721386 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.721405 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.825423 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.825473 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.825485 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.825504 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.825516 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.929464 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.929513 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.929532 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.929551 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.929563 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.032443 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.032531 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.032560 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.032589 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.032610 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.135714 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.135782 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.135801 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.135833 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.135852 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.240535 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.240610 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.240628 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.240652 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.240670 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.344609 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.344674 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.344696 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.344722 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.344740 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.349421 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.349894 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 23:09:40.349823521 +0000 UTC m=+84.514531026 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.448703 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.448833 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.448899 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.449001 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.449031 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.451367 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.451459 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.451502 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.451573 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451681 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451689 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451731 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451780 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451808 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451739 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451831 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:40.451772939 +0000 UTC m=+84.616480434 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451851 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451862 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:40.451848901 +0000 UTC m=+84.616556396 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451913 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:40.451892682 +0000 UTC m=+84.616600187 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451827 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.452085 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:40.452046026 +0000 UTC m=+84.616753571 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.492196 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:54:29.693800367 +0000 UTC Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.516621 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.516676 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.516727 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.516897 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.517031 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.517205 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.551447 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.551526 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.551543 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.551563 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.551602 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.654735 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.654785 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.654795 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.654813 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.654824 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.757367 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.757403 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.757414 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.757427 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.757436 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.860029 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.860127 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.860152 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.860181 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.860200 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.963451 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.963880 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.964456 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.964686 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.964819 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.072797 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.072837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.072850 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.072867 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.072878 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.176036 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.176529 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.176698 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.176837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.177008 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.279923 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.279974 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.279987 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.280003 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.280016 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.383062 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.383412 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.383695 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.384090 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.384387 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.487909 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.487971 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.487982 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.487998 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.488010 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.492721 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:47:56.190066349 +0000 UTC Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.516179 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:09 crc kubenswrapper[4995]: E0126 23:09:09.516330 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.591618 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.591727 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.591749 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.591774 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.591815 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.695475 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.695539 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.695556 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.695581 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.695598 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.798546 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.798614 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.798632 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.798656 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.798677 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.902022 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.902147 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.902184 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.902209 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.902226 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.005225 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.005274 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.005291 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.005314 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.005330 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.108270 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.108348 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.108370 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.108396 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.108414 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.212186 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.212245 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.212263 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.212286 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.212308 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.315397 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.315457 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.315474 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.315498 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.315516 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.418830 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.418908 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.418930 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.418959 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.418977 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.492916 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 00:54:40.565124931 +0000 UTC Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.516310 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.516362 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.516430 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:10 crc kubenswrapper[4995]: E0126 23:09:10.516427 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:10 crc kubenswrapper[4995]: E0126 23:09:10.516533 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:10 crc kubenswrapper[4995]: E0126 23:09:10.516587 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.523917 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.523982 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.524001 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.524027 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.524046 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.625811 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.625870 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.625889 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.625912 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.625931 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.728756 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.728798 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.728807 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.728821 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.728833 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.832236 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.832296 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.832315 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.832337 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.832353 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.935271 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.935319 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.935336 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.935359 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.935377 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.039361 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.039406 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.039418 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.039434 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.039449 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.143238 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.143614 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.143803 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.143936 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.144069 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.247476 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.247527 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.247545 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.247564 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.247577 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.351040 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.351087 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.351119 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.351136 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.351148 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.454289 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.454347 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.454366 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.454388 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.454407 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.493956 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:04:12.398807238 +0000 UTC Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.516341 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:11 crc kubenswrapper[4995]: E0126 23:09:11.516523 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.517856 4995 scope.go:117] "RemoveContainer" containerID="ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.537026 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.554872 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T2
3:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.557612 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.557686 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.557699 4995 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.557718 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.557730 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.568317 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b164
39ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.582777 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4
bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.599460 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.612778 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.637533 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05
cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.652704 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.661265 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.661300 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.661311 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.661327 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.661339 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.675613 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.691090 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.709966 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.725042 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.740766 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.754653 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.763204 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.763244 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.763255 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.763271 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.763281 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.767056 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.779749 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.806449 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.820975 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.832981 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.864996 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.865039 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.865050 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 
23:09:11.865067 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.865079 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.869157 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/1.log" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.872148 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.872542 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.904131 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.924367 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.943647 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.956438 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.967026 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.967068 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.967077 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.967091 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.967126 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.972907 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc 
kubenswrapper[4995]: I0126 23:09:11.987802 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.010666 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.024517 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.036349 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd5
8dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.051261 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.062401 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.069336 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.069385 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.069396 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.069412 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.069430 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.074199 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.084398 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.096348 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.119879 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 
23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.133896 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\"
,\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d87
95611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.143356 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.155774 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.171613 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.171669 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.171684 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.171703 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.171715 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.273953 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.274004 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.274013 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.274035 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.274044 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.375916 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.375976 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.375994 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.376023 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.376041 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.478486 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.478528 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.478538 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.478552 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.478561 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.494405 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 21:01:46.93306268 +0000 UTC Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.517095 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.517089 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.517259 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:12 crc kubenswrapper[4995]: E0126 23:09:12.517403 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:12 crc kubenswrapper[4995]: E0126 23:09:12.517558 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:12 crc kubenswrapper[4995]: E0126 23:09:12.517675 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.582496 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.582589 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.582615 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.582639 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.582659 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.684961 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.684997 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.685007 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.685024 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.685035 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.788347 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.788429 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.788454 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.788482 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.788505 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.878465 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/2.log" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.879487 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/1.log" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.881691 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" exitCode=1 Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.881746 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.881837 4995 scope.go:117] "RemoveContainer" containerID="ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.882375 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:09:12 crc kubenswrapper[4995]: E0126 23:09:12.882577 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.891878 4995 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.891946 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.891969 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.892000 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.892022 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.905981 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.929787 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.951921 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.978955 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:12Z\\\",\\\"message\\\":\\\"CP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8444, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0126 23:09:12.386001 6705 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:09:12.385981 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0126 23:09:12.386022 6705 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0126 23:09:12.385075 6705 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB 
cluster-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf
8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.995295 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.995346 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.995363 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.995384 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.995430 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.995487 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.011625 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.024359 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.036295 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.045932 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.056830 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.068396 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",
\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.079017 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.090351 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.097873 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.097910 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.097918 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.097935 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.097946 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.115367 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.130215 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.145782 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.157579 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.166936 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.200691 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.200731 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.200743 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.200759 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.200770 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.303058 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.303150 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.303168 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.303195 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.303217 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.406819 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.406880 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.406891 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.406910 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.406925 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.495524 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:46:50.967939099 +0000 UTC Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.509702 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.509758 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.509775 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.509800 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.509816 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.516619 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.516774 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.611754 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.611805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.611819 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.611840 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.611859 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.715221 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.715301 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.715362 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.715395 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.715416 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.765722 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.765798 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.765822 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.765852 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.765878 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.789180 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.793306 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.793365 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.793383 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.793409 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.793426 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.808696 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.813071 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.813124 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.813154 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.813171 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.813185 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.828977 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.832433 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.832464 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.832473 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.832485 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.832496 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.849229 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.853497 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.853521 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.853530 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.853543 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.853553 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.870714 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.870872 4995 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.872465 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.872495 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.872506 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.872521 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.872532 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.886551 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/2.log" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.890624 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.890807 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.903252 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4
bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.919828 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\"
,\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d87
95611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.932958 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.947494 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.960790 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.971776 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc 
kubenswrapper[4995]: I0126 23:09:13.974995 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.975043 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.975057 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.975074 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.975086 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.000395 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.017908 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.032493 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.045887 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.061500 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.075143 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.077142 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.077245 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.077264 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.077289 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.077307 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.093745 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.108121 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.120648 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.133615 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.156756 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:12Z\\\",\\\"message\\\":\\\"CP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8444, 
clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0126 23:09:12.386001 6705 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:09:12.385981 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0126 23:09:12.386022 6705 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0126 23:09:12.385075 6705 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.171088 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.180164 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.180211 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.180223 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.180245 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.180259 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.283737 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.283805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.283820 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.283844 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.283860 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.387072 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.387148 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.387160 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.387180 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.387192 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.490311 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.490376 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.490398 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.490425 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.490445 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.496532 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:01:38.047446398 +0000 UTC Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.516971 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.517017 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:14 crc kubenswrapper[4995]: E0126 23:09:14.517165 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.517213 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:14 crc kubenswrapper[4995]: E0126 23:09:14.517379 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:14 crc kubenswrapper[4995]: E0126 23:09:14.517638 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.593367 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.593409 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.593420 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.593436 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.593450 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.696718 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.696770 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.696781 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.696797 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.696807 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.799369 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.799401 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.799410 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.799423 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.799432 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.902292 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.902352 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.902370 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.902392 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.902409 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.005888 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.005956 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.005968 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.005982 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.005992 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.109056 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.109145 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.109166 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.109190 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.109205 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.211600 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.211654 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.211670 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.211706 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.211722 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.314724 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.314778 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.314795 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.314853 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.314871 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.418160 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.418197 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.418205 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.418218 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.418228 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.497678 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:56:27.875894447 +0000 UTC Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.517272 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:15 crc kubenswrapper[4995]: E0126 23:09:15.517440 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.523630 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.523701 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.523717 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.523740 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.523761 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.626580 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.626638 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.626655 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.626674 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.626690 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.729494 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.729544 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.729561 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.729583 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.729605 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.833194 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.833269 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.833288 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.833315 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.833333 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.935988 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.936059 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.936082 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.936148 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.936174 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.039482 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.039546 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.039563 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.039591 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.039610 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.142130 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.142191 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.142213 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.142244 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.142265 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.245149 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.245181 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.245197 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.245212 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.245222 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.348781 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.348845 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.348864 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.348889 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.348906 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.451197 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.451267 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.451289 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.451315 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.451335 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.498168 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:18:40.145091895 +0000 UTC Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.517017 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.517068 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:16 crc kubenswrapper[4995]: E0126 23:09:16.517300 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.517347 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:16 crc kubenswrapper[4995]: E0126 23:09:16.517527 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:16 crc kubenswrapper[4995]: E0126 23:09:16.517665 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.538291 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.556597 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.556663 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.556681 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.556701 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.556714 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.560236 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.572929 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.584948 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.597199 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.610031 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.620619 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.637491 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:12Z\\\",\\\"message\\\":\\\"CP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8444, 
clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0126 23:09:12.386001 6705 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:09:12.385981 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0126 23:09:12.386022 6705 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0126 23:09:12.385075 6705 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.651127 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.658796 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.658844 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.658856 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.658874 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.658886 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.663533 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.678168 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\"
,\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d87
95611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.688685 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.699349 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.714903 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.727145 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.749094 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05
cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.762335 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.762392 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.762405 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.762424 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.762438 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.762969 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.774058 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.864293 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.864355 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.864368 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.864394 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.864409 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.967720 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.967763 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.967778 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.967795 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.967807 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.070292 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.070344 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.070357 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.070403 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.070417 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.174044 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.174302 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.174383 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.174499 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.174584 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.277795 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.277858 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.277874 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.277895 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.277910 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.381034 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.381118 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.381133 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.381153 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.381167 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.484168 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.484210 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.484221 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.484234 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.484243 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.499730 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 11:45:25.023815251 +0000 UTC Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.517299 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:17 crc kubenswrapper[4995]: E0126 23:09:17.517533 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.592149 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.592187 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.592199 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.592215 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.592226 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.694998 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.695075 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.695091 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.695138 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.695153 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.798361 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.798423 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.798446 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.798472 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.798492 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.901381 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.901441 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.901460 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.901482 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.901499 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.003920 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.003964 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.003972 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.003987 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.003996 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.106814 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.106870 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.106887 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.106909 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.106925 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.210612 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.210647 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.210654 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.210668 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.210677 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.313542 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.313635 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.313657 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.313685 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.313701 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.416796 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.416829 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.416837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.416850 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.416859 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.500376 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 07:19:02.893840933 +0000 UTC Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.516768 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.516777 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:18 crc kubenswrapper[4995]: E0126 23:09:18.516910 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.516941 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:18 crc kubenswrapper[4995]: E0126 23:09:18.517025 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:18 crc kubenswrapper[4995]: E0126 23:09:18.517173 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.518587 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.518612 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.518620 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.518630 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.518638 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.621206 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.621243 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.621255 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.621271 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.621286 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.724149 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.724194 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.724212 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.724235 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.724252 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.826365 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.826397 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.826407 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.826422 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.826433 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.928665 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.928719 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.928732 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.928749 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.928761 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.036087 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.036372 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.036456 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.036541 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.036661 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.138672 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.138711 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.138723 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.138740 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.138752 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.241576 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.241618 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.241633 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.241652 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.241666 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.344244 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.344313 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.344338 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.344373 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.344393 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.447000 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.447043 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.447060 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.447081 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.447122 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.501021 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:41:53.045482529 +0000 UTC Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.516400 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:19 crc kubenswrapper[4995]: E0126 23:09:19.516586 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.549813 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.549863 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.549881 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.549904 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.549922 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.652702 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.653070 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.653250 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.653418 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.653564 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.757020 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.757436 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.757574 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.757748 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.757923 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.860686 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.860723 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.860731 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.860745 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.860757 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.962711 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.962740 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.962750 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.962762 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.962770 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.065192 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.065237 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.065249 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.065266 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.065277 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.168086 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.168142 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.168156 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.168172 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.168186 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.272709 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.272760 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.272775 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.272813 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.272834 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.375618 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.375653 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.375662 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.375676 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.375684 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.478705 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.478753 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.478768 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.478786 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.478798 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.501333 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:46:08.189474616 +0000 UTC
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.516480 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.516542 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.516499 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 23:09:20 crc kubenswrapper[4995]: E0126 23:09:20.516616 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 23:09:20 crc kubenswrapper[4995]: E0126 23:09:20.516711 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 23:09:20 crc kubenswrapper[4995]: E0126 23:09:20.516790 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.581278 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.581698 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.581853 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.581957 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.582053 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.684592 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.684657 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.684678 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.684702 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.684719 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.787914 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.787957 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.787968 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.787987 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.787999 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.891452 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.891499 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.891512 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.891529 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.891540 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.995226 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.995277 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.995286 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.995302 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.995314 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.097687 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.097719 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.097727 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.097741 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.097750 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.200969 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.201299 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.201426 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.201589 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.201698 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.304339 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.304384 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.304396 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.304412 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.304424 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.406168 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.406212 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.406223 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.406239 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.406250 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.501924 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:43:46.244924412 +0000 UTC
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.508622 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.508681 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.508700 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.508727 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.508745 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.516785 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg"
Jan 26 23:09:21 crc kubenswrapper[4995]: E0126 23:09:21.516941 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.610891 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.610952 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.610969 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.610987 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.611002 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.712811 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.713055 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.713151 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.713242 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.713327 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.815907 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.815953 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.815962 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.815975 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.815983 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.917727 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.917758 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.917766 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.917779 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.917789 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.020699 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.020767 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.020782 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.020807 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.020823 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.122928 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.123225 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.123329 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.123422 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.123520 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.226576 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.226610 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.226618 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.226631 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.226640 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.329023 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.329061 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.329076 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.329091 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.329113 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.431711 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.431749 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.431759 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.431774 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.431784 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.502341 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:34:43.870582123 +0000 UTC
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.516350 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.516372 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.516392 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 23:09:22 crc kubenswrapper[4995]: E0126 23:09:22.516907 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 23:09:22 crc kubenswrapper[4995]: E0126 23:09:22.516924 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 23:09:22 crc kubenswrapper[4995]: E0126 23:09:22.516950 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.534687 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.534973 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.535038 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.535129 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.535213 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.636883 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.636911 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.636919 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.636931 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.636941 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.740213 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.740481 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.740547 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.740634 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.740709 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.848751 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.849049 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.849169 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.849262 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.849346 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.952069 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.952143 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.952160 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.952183 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.952199 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.054595 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.054656 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.054674 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.054697 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.054716 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.157377 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.157433 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.157455 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.157497 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.157531 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.259557 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.259891 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.259980 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.260050 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.260126 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.362628 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.362665 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.362675 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.362690 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.362701 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.426553 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:23 crc kubenswrapper[4995]: E0126 23:09:23.426994 4995 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:09:23 crc kubenswrapper[4995]: E0126 23:09:23.427231 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs podName:4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.427210879 +0000 UTC m=+99.591918344 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs") pod "network-metrics-daemon-vlmfg" (UID: "4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.464842 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.464900 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.464911 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.464927 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.464936 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.503443 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 19:58:27.61206185 +0000 UTC Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.516766 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:23 crc kubenswrapper[4995]: E0126 23:09:23.516924 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.567334 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.567432 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.567455 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.567481 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.567498 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.669522 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.669579 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.669588 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.669603 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.669613 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.772339 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.772644 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.772707 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.772780 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.772860 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.875019 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.875383 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.875519 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.875712 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.875839 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.978955 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.979023 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.979045 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.979073 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.979094 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.986917 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.986975 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.986997 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.987022 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.987043 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.002307 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:24Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.006196 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.006222 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.006230 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.006242 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.006251 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.017970 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:24Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.022548 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.022588 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.022599 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.022615 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.022629 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.032897 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:24Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.035881 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.035911 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.035921 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.035935 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.035943 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.046013 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:24Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.049334 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.049395 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.049408 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.049425 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.049437 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.061603 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:24Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.061766 4995 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.081368 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.081410 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.081421 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.081436 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.081446 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.183341 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.183391 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.183403 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.183418 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.183428 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.285621 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.285660 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.285671 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.285687 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.285698 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.387498 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.387547 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.387556 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.387570 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.387583 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.490337 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.490390 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.490408 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.490431 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.490449 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.503754 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:35:06.619154878 +0000 UTC Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.517265 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.517275 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.517733 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.518082 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.518232 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.518218 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.592582 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.592649 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.592670 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.592698 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.592720 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.695885 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.695927 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.695936 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.695950 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.695959 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.798142 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.798193 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.798205 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.798223 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.798234 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.900346 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.900393 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.900403 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.900421 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.900434 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.002904 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.002942 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.002950 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.002965 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.002975 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.104545 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.104581 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.104591 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.104605 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.104614 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.207601 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.207648 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.207657 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.207675 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.207685 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.309767 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.309810 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.309883 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.309904 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.309915 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.411867 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.412443 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.412469 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.412495 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.412513 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.504740 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:29:26.035287817 +0000 UTC Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.514591 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.514626 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.514638 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.514653 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.514664 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.516822 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:25 crc kubenswrapper[4995]: E0126 23:09:25.516909 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.617382 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.617417 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.617429 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.617446 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.617458 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.720485 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.720557 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.720577 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.720602 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.720619 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.823443 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.823475 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.823486 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.823502 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.823512 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.925320 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.925357 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.925371 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.925388 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.925400 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.928250 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hln88_4ba70657-ea12-4a85-9ec3-c1423b5b6912/kube-multus/0.log" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.928304 4995 generic.go:334] "Generic (PLEG): container finished" podID="4ba70657-ea12-4a85-9ec3-c1423b5b6912" containerID="cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81" exitCode=1 Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.928333 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hln88" event={"ID":"4ba70657-ea12-4a85-9ec3-c1423b5b6912","Type":"ContainerDied","Data":"cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.928753 4995 scope.go:117] "RemoveContainer" containerID="cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.943002 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:25Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.955779 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:25Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:25 crc 
kubenswrapper[4995]: I0126 23:09:25.975871 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:25Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.988701 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:25Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.002677 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.014712 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.028494 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.028523 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.028531 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.028544 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.028552 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.029483 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.042521 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"2026-01-26T23:08:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1\\\\n2026-01-26T23:08:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1 to /host/opt/cni/bin/\\\\n2026-01-26T23:08:40Z [verbose] multus-daemon started\\\\n2026-01-26T23:08:40Z [verbose] Readiness Indicator file check\\\\n2026-01-26T23:09:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.054308 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/r
un/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.068132 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.078587 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.090049 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.107340 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:12Z\\\",\\\"message\\\":\\\"CP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8444, 
clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0126 23:09:12.386001 6705 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:09:12.385981 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0126 23:09:12.386022 6705 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0126 23:09:12.385075 6705 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.121377 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.131461 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.131508 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.131519 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.131537 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.131550 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.132706 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.147333 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\"
,\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d87
95611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.162392 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.172650 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.233074 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.233122 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.233131 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.233143 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.233151 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.336006 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.336074 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.336091 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.336140 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.336156 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.438438 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.438488 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.438498 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.438516 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.438528 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.505478 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:11:14.884630087 +0000 UTC Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.516842 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.516875 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.516921 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:26 crc kubenswrapper[4995]: E0126 23:09:26.517045 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:26 crc kubenswrapper[4995]: E0126 23:09:26.517176 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:26 crc kubenswrapper[4995]: E0126 23:09:26.517295 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.528853 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.
d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.541023 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.541149 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.541166 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.541183 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.541194 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.543725 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.555016 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T
23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] 
\\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri
-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.566088 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.577737 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.588523 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.599944 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.627406 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05
cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.640745 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.643129 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.643172 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.643183 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.643200 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.643212 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.652010 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"2026-01-26T23:08:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1\\\\n2026-01-26T23:08:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1 to /host/opt/cni/bin/\\\\n2026-01-26T23:08:40Z [verbose] multus-daemon started\\\\n2026-01-26T23:08:40Z [verbose] Readiness Indicator file check\\\\n2026-01-26T23:09:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.662322 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.673008 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.683714 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.694273 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.706365 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.717182 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.730480 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.745711 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.745758 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.745769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.745786 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.745797 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.755893 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:12Z\\\",\\\"message\\\":\\\"CP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8444, 
clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0126 23:09:12.386001 6705 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:09:12.385981 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0126 23:09:12.386022 6705 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0126 23:09:12.385075 6705 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.848446 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.848482 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.848489 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.848505 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.848514 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.933028 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hln88_4ba70657-ea12-4a85-9ec3-c1423b5b6912/kube-multus/0.log" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.933310 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hln88" event={"ID":"4ba70657-ea12-4a85-9ec3-c1423b5b6912","Type":"ContainerStarted","Data":"c1c729b92e56f57861fb9e9cb3255d4e859441764e1404ed6d2ec73d8bf2cc23"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.946895 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\"
,\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d87
95611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.950006 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.950037 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.950051 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.950071 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.950086 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.956010 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.967455 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4
bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.990687 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.005964 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.018541 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.029698 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.038869 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.050479 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.052016 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.052040 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.052048 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.052061 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.052072 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.063274 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.074726 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c729b92e56f57861fb9e9cb3255d4e859441764e1404ed6d2ec73d8bf2cc23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"2026-01-26T23:08:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1\\\\n2026-01-26T23:08:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1 to /host/opt/cni/bin/\\\\n2026-01-26T23:08:40Z [verbose] multus-daemon started\\\\n2026-01-26T23:08:40Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T23:09:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.083803 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368
e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.094004 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.113455 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:12Z\\\",\\\"message\\\":\\\"CP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8444, 
clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0126 23:09:12.386001 6705 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:09:12.385981 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0126 23:09:12.386022 6705 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0126 23:09:12.385075 6705 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.123940 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.136416 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.147573 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.153679 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.153707 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.153717 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.153731 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.153741 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.161346 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.256481 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.256522 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.256534 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.256552 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.256566 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.358413 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.358448 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.358457 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.358470 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.358482 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.460685 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.460739 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.460751 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.460770 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.460785 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.505631 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:44:45.649551966 +0000 UTC Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.516861 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:27 crc kubenswrapper[4995]: E0126 23:09:27.516982 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.563370 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.563401 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.563412 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.563425 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.563436 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.666487 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.666520 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.666530 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.666546 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.666555 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.768875 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.768917 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.768926 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.768938 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.768947 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.872166 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.872210 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.872222 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.872236 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.872244 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.974066 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.974114 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.974126 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.974140 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.974151 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.076641 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.076681 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.076693 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.076709 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.076720 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.179418 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.179467 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.179478 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.179500 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.179512 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.282150 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.282225 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.282237 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.282252 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.282262 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.384919 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.384984 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.384996 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.385013 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.385022 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.488597 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.488654 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.488665 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.488684 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.488698 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.505975 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:15:18.815880118 +0000 UTC Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.516706 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.516743 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.516718 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:28 crc kubenswrapper[4995]: E0126 23:09:28.516893 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:28 crc kubenswrapper[4995]: E0126 23:09:28.516953 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:28 crc kubenswrapper[4995]: E0126 23:09:28.517023 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.527752 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.591120 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.591198 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.591214 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.591237 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.591251 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.695393 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.695443 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.695456 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.695477 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.695489 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.798215 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.798263 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.798273 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.798289 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.798313 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.900457 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.900518 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.900535 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.900559 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.900576 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.007768 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.007810 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.007819 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.007836 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.007845 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.110518 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.110552 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.110562 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.110577 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.110587 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.213452 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.213503 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.213515 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.213533 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.213546 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.315737 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.315769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.315779 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.315794 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.315805 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.420499 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.420574 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.420591 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.420611 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.420626 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.506151 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:25:48.610619021 +0000 UTC Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.516505 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:29 crc kubenswrapper[4995]: E0126 23:09:29.516934 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.517263 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:09:29 crc kubenswrapper[4995]: E0126 23:09:29.517457 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.522615 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.522638 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.522650 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.522665 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.522676 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.625320 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.625348 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.625355 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.625368 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.625376 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.729485 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.729538 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.729551 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.729569 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.729581 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.832248 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.832298 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.832313 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.832333 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.832344 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.934153 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.934212 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.934224 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.934242 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.934254 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.035960 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.036007 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.036022 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.036036 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.036048 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.139164 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.139231 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.139257 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.139286 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.139309 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.242851 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.242947 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.242966 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.243022 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.243045 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.345855 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.345909 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.345925 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.345945 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.345959 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.448062 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.448119 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.448131 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.448147 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.448159 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.506719 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:35:36.666724779 +0000 UTC Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.519233 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.519653 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:30 crc kubenswrapper[4995]: E0126 23:09:30.519794 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.519995 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:30 crc kubenswrapper[4995]: E0126 23:09:30.520238 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:30 crc kubenswrapper[4995]: E0126 23:09:30.520400 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.551209 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.551259 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.551269 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.551287 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.551300 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.654184 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.654237 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.654249 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.654271 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.654285 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.757615 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.757679 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.757687 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.757702 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.757711 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.861038 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.861092 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.861149 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.861167 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.861186 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.963823 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.963901 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.963924 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.963958 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.963981 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.066696 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.066731 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.066742 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.066756 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.066768 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.169561 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.169618 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.169639 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.169667 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.169684 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.272775 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.272844 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.272865 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.272890 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.272906 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.375435 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.375501 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.375513 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.375529 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.375542 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.478214 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.478492 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.478579 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.478645 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.478712 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.507805 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:27:17.673384107 +0000 UTC Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.517240 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:31 crc kubenswrapper[4995]: E0126 23:09:31.517403 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.581092 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.581137 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.581150 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.581164 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.581175 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.682970 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.683004 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.683013 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.683027 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.683036 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.785309 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.785573 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.785675 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.785775 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.785868 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.889053 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.889115 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.889125 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.889138 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.889148 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.991581 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.991638 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.991649 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.991666 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.991678 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.094053 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.094266 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.094297 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.094312 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.094324 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.196417 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.196510 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.196548 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.196567 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.196581 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.299226 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.299287 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.299305 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.299330 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.299349 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.403837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.403882 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.403891 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.403906 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.403919 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.507218 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.507279 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.507296 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.507318 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.507332 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.508275 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 17:31:03.902238588 +0000 UTC Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.517308 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.517345 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:32 crc kubenswrapper[4995]: E0126 23:09:32.517445 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.517466 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:32 crc kubenswrapper[4995]: E0126 23:09:32.517618 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:32 crc kubenswrapper[4995]: E0126 23:09:32.517691 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.609210 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.609246 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.609255 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.609268 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.609279 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.711565 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.711822 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.711907 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.712058 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.712166 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.815296 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.815331 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.815341 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.815354 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.815363 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.919286 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.919338 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.919351 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.919368 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.919380 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.022159 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.022482 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.022673 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.022862 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.023035 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.125985 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.126040 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.126052 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.126070 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.126082 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.228557 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.228586 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.228594 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.228608 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.228617 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.331147 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.331199 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.331214 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.331230 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.331246 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.433807 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.433875 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.433898 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.433925 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.433946 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.509191 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:35:43.173393138 +0000 UTC Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.516663 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:33 crc kubenswrapper[4995]: E0126 23:09:33.517030 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.536760 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.536856 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.536867 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.536936 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.536950 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.640190 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.640249 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.640265 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.640289 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.640309 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.743744 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.743813 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.743837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.743867 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.743889 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.846856 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.846906 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.846917 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.846930 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.846939 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.949458 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.949488 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.949498 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.949510 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.949519 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.052457 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.052896 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.053401 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.053842 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.054270 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.157542 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.157597 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.157614 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.157638 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.157655 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.260367 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.260410 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.260421 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.260438 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.260452 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.369998 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.370071 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.370094 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.370155 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.370178 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.434984 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.435042 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.435059 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.435083 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.435130 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.455896 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:34Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.461313 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.461531 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.461683 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.461821 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.461950 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.482426 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:34Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.487545 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.487613 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.487637 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.487666 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.487688 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.510661 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:37:55.016346118 +0000 UTC Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.511084 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",
\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:34Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.515829 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.515877 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.515892 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.515916 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.515932 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.516473 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.516627 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.516521 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.517003 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.517189 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.517363 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.531985 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:34Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.536579 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.536632 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.536690 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.536717 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.536731 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.551877 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:34Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.552120 4995 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.553536 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.553584 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.553597 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.553615 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.553629 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.656723 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.656750 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.656759 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.656771 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.656789 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.759353 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.759792 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.760166 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.760465 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.760751 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.863596 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.863647 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.863661 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.863678 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.863691 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.966251 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.966286 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.966295 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.966309 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.966318 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.069489 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.069547 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.069562 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.069581 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.069593 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.173084 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.173166 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.173177 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.173195 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.173207 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.275808 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.275873 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.275887 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.275904 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.275917 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.377647 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.377690 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.377700 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.377725 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.377737 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.480053 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.480184 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.480211 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.480243 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.480267 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.511559 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:21:35.695350027 +0000 UTC Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.516914 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:35 crc kubenswrapper[4995]: E0126 23:09:35.517045 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.583845 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.583883 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.583895 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.583911 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.583923 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.687019 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.687061 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.687069 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.687085 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.687093 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.789779 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.789871 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.789904 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.789933 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.789959 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.892795 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.892842 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.892854 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.892872 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.892885 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.995247 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.995290 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.995299 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.995314 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.995322 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.097195 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.097234 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.097246 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.097263 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.097276 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.200457 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.205416 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.205544 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.205673 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.205698 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.308874 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.308908 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.308919 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.308934 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.308945 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.411863 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.411900 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.411912 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.411930 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.411940 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.512453 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:34:52.220789802 +0000 UTC Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.515127 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.515159 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.515170 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.515191 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.515201 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.516378 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.516484 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:36 crc kubenswrapper[4995]: E0126 23:09:36.516665 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.516721 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:36 crc kubenswrapper[4995]: E0126 23:09:36.517062 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:36 crc kubenswrapper[4995]: E0126 23:09:36.517256 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.541351 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:12Z\\\",\\\"message\\\":\\\"CP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8444, 
clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0126 23:09:12.386001 6705 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:09:12.385981 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0126 23:09:12.386022 6705 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0126 23:09:12.385075 6705 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.553670 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a724234-c0d3-4f4d-8995-9c26af415bae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a55e11716925cce81c41c9f11fb000386beeb8b70e04254b605df03a4203004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4968022ac9ab52cfea33d3fccf8e070660139e224bba28dc4ade8a43c05bf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4968022ac9ab52cfea33d3fccf8e070660139e224bba28dc4ade8a43c05bf46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.566819 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.585267 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.601681 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.616799 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.618024 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.618071 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.618081 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.618114 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.618127 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.627050 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.638989 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.653317 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",
\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.668462 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.683040 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.702735 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.716965 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.720553 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.720627 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.720639 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.720663 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.720676 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.733905 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.745564 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.755945 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.772721 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.790321 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.808054 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c729b92e56f57861fb9e9cb3255d4e859441764e1404ed6d2ec73d8bf2cc23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"2026-01-26T23:08:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1\\\\n2026-01-26T23:08:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1 to /host/opt/cni/bin/\\\\n2026-01-26T23:08:40Z [verbose] multus-daemon started\\\\n2026-01-26T23:08:40Z [verbose] Readiness Indicator file check\\\\n2026-01-26T23:09:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.824047 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.824587 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.824693 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.824797 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.824873 4995 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.927248 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.927298 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.927307 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.927324 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.927335 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.029962 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.030018 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.030029 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.030047 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.030057 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.132705 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.132747 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.132755 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.132769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.132778 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.235128 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.235173 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.235189 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.235210 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.235225 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.338559 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.338625 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.338641 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.338666 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.338687 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.440498 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.440559 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.440578 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.440601 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.440618 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.513477 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:57:23.941395266 +0000 UTC Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.516801 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:37 crc kubenswrapper[4995]: E0126 23:09:37.516936 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.542671 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.542769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.542781 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.542800 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.542812 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.646026 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.646083 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.646093 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.646131 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.646142 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.748798 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.748839 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.748847 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.748864 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.748874 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.851763 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.851860 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.851881 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.851916 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.851933 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.954622 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.954666 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.954675 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.954689 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.954698 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.057175 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.057234 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.057243 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.057261 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.057270 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.160502 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.160565 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.160583 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.160606 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.160624 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.263665 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.263732 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.263746 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.263765 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.263778 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.366246 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.366283 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.366293 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.366307 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.366317 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.469890 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.469964 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.469988 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.470015 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.470034 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.513783 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:21:49.200370145 +0000 UTC Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.517119 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.517130 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:38 crc kubenswrapper[4995]: E0126 23:09:38.517272 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.517309 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:38 crc kubenswrapper[4995]: E0126 23:09:38.517426 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:38 crc kubenswrapper[4995]: E0126 23:09:38.517483 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.573393 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.573439 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.573452 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.573469 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.573481 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.675791 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.675862 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.675879 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.675903 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.675924 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.779429 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.779486 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.779507 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.779535 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.779555 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.883547 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.883627 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.883649 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.883679 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.883701 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.986085 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.986127 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.986136 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.986148 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.986156 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.089451 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.089503 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.089515 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.089532 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.089543 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.192405 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.192451 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.192462 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.192481 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.192491 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.295164 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.295214 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.295229 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.295252 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.295270 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.398298 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.398359 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.398376 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.398410 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.398428 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.500685 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.500733 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.500748 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.500766 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.500780 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.514335 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:37:31.915524318 +0000 UTC Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.516679 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:39 crc kubenswrapper[4995]: E0126 23:09:39.516866 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.602398 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.602437 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.602448 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.602465 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.602476 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.704590 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.704669 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.704688 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.704715 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.704733 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.807770 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.807827 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.807844 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.807868 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.807889 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.910915 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.911023 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.911047 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.911076 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.911127 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.013922 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.013991 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.014013 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.014044 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.014069 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.117008 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.117065 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.117083 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.117132 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.117149 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.220306 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.220374 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.220393 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.220416 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.220436 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.322990 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.323044 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.323055 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.323074 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.323085 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.408316 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.408608 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 23:10:44.408579841 +0000 UTC m=+148.573287336 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.426955 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.427032 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.427050 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.427073 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.427092 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.510321 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.510410 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.510437 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.510468 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510546 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510605 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510616 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510630 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510678 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510708 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510623 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510731 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510631 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:10:44.510612217 +0000 UTC m=+148.675319682 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510788 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:10:44.510769761 +0000 UTC m=+148.675477306 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510823 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:10:44.510794801 +0000 UTC m=+148.675502266 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510840 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:10:44.510831502 +0000 UTC m=+148.675539067 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.514685 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:10:32.493000462 +0000 UTC Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.517005 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.517075 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.517186 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.517242 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.517295 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.517381 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.529751 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.529805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.529817 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.529837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.529849 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.633140 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.633184 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.633194 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.633211 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.633222 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.736355 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.736433 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.736445 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.736462 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.736472 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.838635 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.838689 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.838699 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.838712 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.838721 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.940843 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.940880 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.940889 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.940902 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.940911 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.042835 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.042891 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.042902 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.042915 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.042924 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.145365 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.145418 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.145435 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.145451 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.145462 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.248076 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.248180 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.248197 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.248222 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.248239 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.351318 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.351389 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.351404 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.351430 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.351448 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.453162 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.453194 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.453201 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.453214 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.453224 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.515710 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:12:08.949807294 +0000 UTC Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.516533 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:41 crc kubenswrapper[4995]: E0126 23:09:41.516676 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.555153 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.555195 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.555209 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.555226 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.555236 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.658351 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.658434 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.658449 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.658478 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.658498 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.760788 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.760845 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.760855 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.760878 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.760889 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.863566 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.864005 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.864016 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.864032 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.864043 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.966066 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.966124 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.966133 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.966148 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.966158 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.068069 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.068121 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.068132 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.068147 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.068158 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.171062 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.171145 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.171162 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.171189 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.171206 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.273884 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.273983 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.274007 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.274037 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.274064 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.375866 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.375910 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.375921 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.375937 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.375947 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.478950 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.479021 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.479033 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.479054 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.479067 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.516546 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:31:05.972224646 +0000 UTC Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.516672 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.516711 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.516739 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:42 crc kubenswrapper[4995]: E0126 23:09:42.524133 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:42 crc kubenswrapper[4995]: E0126 23:09:42.524830 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:42 crc kubenswrapper[4995]: E0126 23:09:42.525027 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.525927 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.581138 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.581172 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.581182 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.581197 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.581206 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.683944 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.683983 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.683994 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.684012 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.684024 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.785964 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.786344 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.786357 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.786374 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.786389 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.889243 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.889282 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.889291 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.889308 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.889317 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.979321 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/2.log" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.980970 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.981883 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.991335 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.991360 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.991369 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.991381 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.991389 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.022446 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podStartSLOduration=66.022428581 podStartE2EDuration="1m6.022428581s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:42.999986576 +0000 UTC m=+87.164694041" watchObservedRunningTime="2026-01-26 23:09:43.022428581 +0000 UTC m=+87.187136046" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.022570 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podStartSLOduration=66.022566074 podStartE2EDuration="1m6.022566074s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.022278528 +0000 UTC m=+87.186986013" watchObservedRunningTime="2026-01-26 23:09:43.022566074 +0000 UTC m=+87.187273539" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.034941 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.034922674 podStartE2EDuration="15.034922674s" podCreationTimestamp="2026-01-26 23:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.034042013 +0000 UTC m=+87.198749498" watchObservedRunningTime="2026-01-26 23:09:43.034922674 +0000 UTC m=+87.199630139" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.064973 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
podStartSLOduration=40.064952523 podStartE2EDuration="40.064952523s" podCreationTimestamp="2026-01-26 23:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.049847027 +0000 UTC m=+87.214554512" watchObservedRunningTime="2026-01-26 23:09:43.064952523 +0000 UTC m=+87.229659988" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.093684 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.093721 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.093729 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.093742 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.093752 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.104162 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-m8zlz" podStartSLOduration=66.104143054 podStartE2EDuration="1m6.104143054s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.103180821 +0000 UTC m=+87.267888296" watchObservedRunningTime="2026-01-26 23:09:43.104143054 +0000 UTC m=+87.268850519" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.140266 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=67.140242701 podStartE2EDuration="1m7.140242701s" podCreationTimestamp="2026-01-26 23:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.126021376 +0000 UTC m=+87.290728851" watchObservedRunningTime="2026-01-26 23:09:43.140242701 +0000 UTC m=+87.304950166" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.153187 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" podStartSLOduration=66.153169054 podStartE2EDuration="1m6.153169054s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.153019611 +0000 UTC m=+87.317727066" watchObservedRunningTime="2026-01-26 23:09:43.153169054 +0000 UTC m=+87.317876519" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.153715 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xltwc" podStartSLOduration=66.153708918 
podStartE2EDuration="1m6.153708918s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.140725002 +0000 UTC m=+87.305432467" watchObservedRunningTime="2026-01-26 23:09:43.153708918 +0000 UTC m=+87.318416383" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.192148 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=65.19213061 podStartE2EDuration="1m5.19213061s" podCreationTimestamp="2026-01-26 23:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.174703377 +0000 UTC m=+87.339410862" watchObservedRunningTime="2026-01-26 23:09:43.19213061 +0000 UTC m=+87.356838075" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.192377 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=63.192372046 podStartE2EDuration="1m3.192372046s" podCreationTimestamp="2026-01-26 23:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.191147976 +0000 UTC m=+87.355855441" watchObservedRunningTime="2026-01-26 23:09:43.192372046 +0000 UTC m=+87.357079511" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.195608 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.195652 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.195666 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.195683 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.195696 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.298374 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.298648 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.298734 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.298838 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.298924 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.301485 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vlmfg"] Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.301594 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:43 crc kubenswrapper[4995]: E0126 23:09:43.301686 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.311182 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-pkt82" podStartSLOduration=66.311165809 podStartE2EDuration="1m6.311165809s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.310581345 +0000 UTC m=+87.475288810" watchObservedRunningTime="2026-01-26 23:09:43.311165809 +0000 UTC m=+87.475873284" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.400997 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.401047 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.401058 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: 
I0126 23:09:43.401074 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.401387 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.502973 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.503008 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.503018 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.503033 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.503042 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.518520 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:46:02.850305499 +0000 UTC Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.605184 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.605217 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.605226 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.605240 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.605250 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.707441 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.707481 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.707493 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.707511 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.707525 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.809478 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.809528 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.809538 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.809554 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.809570 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.911756 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.911805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.911817 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.911832 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.911841 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.013705 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.013764 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.013774 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.013809 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.013820 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.117247 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.117290 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.117306 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.117327 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.117339 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.219995 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.220035 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.220043 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.220057 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.220068 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.322554 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.322604 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.322621 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.322639 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.322651 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.425461 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.425522 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.425536 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.425557 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.425572 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.516974 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.517048 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.516976 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:44 crc kubenswrapper[4995]: E0126 23:09:44.517132 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:44 crc kubenswrapper[4995]: E0126 23:09:44.517211 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:44 crc kubenswrapper[4995]: E0126 23:09:44.517390 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.518969 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:32:54.340777943 +0000 UTC Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.528418 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.528453 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.528461 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.528499 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.528509 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.630854 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.630911 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.630925 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.630945 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.630960 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.660739 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.660778 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.660789 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.660813 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.660827 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.706540 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hln88" podStartSLOduration=67.706521806 podStartE2EDuration="1m7.706521806s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.323700423 +0000 UTC m=+87.488407898" watchObservedRunningTime="2026-01-26 23:09:44.706521806 +0000 UTC m=+88.871229281" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.706751 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc"] Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.707195 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.711204 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.712362 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.712374 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.713755 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.751709 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/d797ab32-8a7c-4f54-be9b-26cdab54574d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.751963 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d797ab32-8a7c-4f54-be9b-26cdab54574d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.752157 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d797ab32-8a7c-4f54-be9b-26cdab54574d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.752310 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d797ab32-8a7c-4f54-be9b-26cdab54574d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.752466 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d797ab32-8a7c-4f54-be9b-26cdab54574d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.853737 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d797ab32-8a7c-4f54-be9b-26cdab54574d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.853795 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d797ab32-8a7c-4f54-be9b-26cdab54574d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.853826 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d797ab32-8a7c-4f54-be9b-26cdab54574d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.853883 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d797ab32-8a7c-4f54-be9b-26cdab54574d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.853901 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/d797ab32-8a7c-4f54-be9b-26cdab54574d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.853921 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d797ab32-8a7c-4f54-be9b-26cdab54574d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.854195 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d797ab32-8a7c-4f54-be9b-26cdab54574d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.855831 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d797ab32-8a7c-4f54-be9b-26cdab54574d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.863065 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d797ab32-8a7c-4f54-be9b-26cdab54574d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 
23:09:44.889867 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d797ab32-8a7c-4f54-be9b-26cdab54574d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.022722 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.516585 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:45 crc kubenswrapper[4995]: E0126 23:09:45.516891 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.519840 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:35:28.932084443 +0000 UTC Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.519924 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.526927 4995 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.759198 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.759406 4995 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.803950 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.804489 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.807299 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.807979 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.808365 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.813580 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.813877 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.814395 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.815929 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.816227 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.816441 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.816701 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.818369 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.818594 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.818895 4995 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zp6fr"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.819062 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.819133 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.819074 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.819229 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.819298 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.819601 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.826088 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.827260 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.829972 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.830634 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.831242 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-klb9g"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.832020 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.832615 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.833834 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.833930 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.834016 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.833969 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.833967 4995 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.834241 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.834552 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.837490 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.838291 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.839871 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.840270 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.840546 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.840838 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.841254 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.842674 4995 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.845095 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-pfw4t"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.852817 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-pfw4t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.856151 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kwqrx"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.856794 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.856885 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.858061 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.858222 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.863511 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tzh2d"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.864056 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.865202 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pw55h"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.865824 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866145 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4r5mm"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866281 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8feb049-3911-43fa-bd25-6ecee076d1ed-auth-proxy-config\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866314 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866342 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2mx2\" (UniqueName: \"kubernetes.io/projected/1d547650-1fdd-4334-9376-5f5b165d5069-kube-api-access-h2mx2\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866372 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866414 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-config\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866443 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d547650-1fdd-4334-9376-5f5b165d5069-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866469 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kntsd\" (UniqueName: \"kubernetes.io/projected/492ea284-e9af-45ce-ac55-c5d8168be715-kube-api-access-kntsd\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866490 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-images\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866531 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj7jv\" (UniqueName: \"kubernetes.io/projected/7f5c78ad-3088-4100-90ac-f863bb21e4a2-kube-api-access-dj7jv\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866553 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-client-ca\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866652 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866673 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-config\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866714 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-config\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866747 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-client-ca\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866786 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cqgv\" (UniqueName: \"kubernetes.io/projected/b345a51c-ec48-4066-a49b-713e73429c2d-kube-api-access-4cqgv\") pod \"cluster-samples-operator-665b6dd947-gqbzs\" (UID: \"b345a51c-ec48-4066-a49b-713e73429c2d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866823 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d547650-1fdd-4334-9376-5f5b165d5069-config\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866858 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-encryption-config\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866893 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9dc5\" (UniqueName: \"kubernetes.io/projected/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-kube-api-access-s9dc5\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866951 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-audit-dir\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866993 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfdf6\" (UniqueName: \"kubernetes.io/projected/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-kube-api-access-pfdf6\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: 
\"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867033 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b345a51c-ec48-4066-a49b-713e73429c2d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gqbzs\" (UID: \"b345a51c-ec48-4066-a49b-713e73429c2d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867059 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djz7g\" (UniqueName: \"kubernetes.io/projected/d8feb049-3911-43fa-bd25-6ecee076d1ed-kube-api-access-djz7g\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867086 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8feb049-3911-43fa-bd25-6ecee076d1ed-config\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867409 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867491 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867596 4995 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867684 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867708 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-etcd-client\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867738 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867902 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868031 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868036 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868212 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-serving-cert\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868242 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-audit-policies\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868263 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f5c78ad-3088-4100-90ac-f863bb21e4a2-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868284 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868333 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868360 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868383 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/492ea284-e9af-45ce-ac55-c5d8168be715-serving-cert\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868408 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868385 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868433 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blxvp\" (UniqueName: \"kubernetes.io/projected/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-kube-api-access-blxvp\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868483 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc5qk\" (UniqueName: 
\"kubernetes.io/projected/ce7a362e-896b-4492-ac2c-08bd19bba7b4-kube-api-access-kc5qk\") pod \"downloads-7954f5f757-pfw4t\" (UID: \"ce7a362e-896b-4492-ac2c-08bd19bba7b4\") " pod="openshift-console/downloads-7954f5f757-pfw4t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868516 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-service-ca-bundle\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868423 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868538 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8feb049-3911-43fa-bd25-6ecee076d1ed-machine-approver-tls\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868551 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868560 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-serving-cert\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868589 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-config\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868657 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868968 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.869178 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.869197 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.869336 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.870145 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.870644 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.871412 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zt9nn"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.871967 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.872400 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dh55c"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.873008 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.877715 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.879002 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jr8qp"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.879372 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v665q"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.879760 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.879846 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.880220 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.882590 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.882881 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.882919 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.883415 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.883704 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.883746 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.884651 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.886981 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.887413 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.899462 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.900718 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hjxrn"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.902606 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.902976 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.903332 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.903381 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.903578 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.903620 4995 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.903661 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.903742 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.904522 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.905947 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.909811 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.916315 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.916526 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.917706 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.917944 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.918472 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.918666 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.918858 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.919010 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.919181 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.919344 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: 
I0126 23:09:45.919525 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.919682 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.919796 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.919901 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.920070 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.920212 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.920316 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.920476 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.920672 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.923373 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.923580 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.926206 4995 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.926442 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.926511 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.926688 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.935732 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.935834 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.935904 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.935996 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.936061 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.936213 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.936592 4995 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.936654 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.936687 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.936814 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.936597 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.938273 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.941637 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.942366 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.944735 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.944964 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.945562 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.945644 4995 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zp6fr"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.948235 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.949536 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.949719 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.949785 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.950056 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.950470 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.950825 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.951417 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-tw45t"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.952084 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.952274 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-crsqt"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.952769 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.954709 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.955724 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.956224 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.956530 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.957606 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.958381 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.961224 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.961536 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.961880 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.969792 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.969869 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970617 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj7jv\" (UniqueName: \"kubernetes.io/projected/7f5c78ad-3088-4100-90ac-f863bb21e4a2-kube-api-access-dj7jv\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970673 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-images\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc 
kubenswrapper[4995]: I0126 23:09:45.970728 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-client-ca\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970746 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-config\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970771 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-image-import-ca\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970793 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-ca\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970816 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-config\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970834 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dedff685-1753-453d-a4ec-4e48b74cfdc4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970861 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-client-ca\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970908 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cqgv\" (UniqueName: \"kubernetes.io/projected/b345a51c-ec48-4066-a49b-713e73429c2d-kube-api-access-4cqgv\") pod \"cluster-samples-operator-665b6dd947-gqbzs\" (UID: \"b345a51c-ec48-4066-a49b-713e73429c2d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970943 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d547650-1fdd-4334-9376-5f5b165d5069-config\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970973 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/321948cb-6f71-4375-b575-ee960cd49bc2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970996 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-service-ca\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971019 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/053917dd-5476-46d8-b9d4-2a1433d86697-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971044 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9dc5\" (UniqueName: \"kubernetes.io/projected/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-kube-api-access-s9dc5\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971066 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-serving-cert\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " 
pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971072 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971085 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-default-certificate\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971128 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-encryption-config\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971166 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-audit-dir\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971187 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sgtz\" (UniqueName: \"kubernetes.io/projected/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-kube-api-access-5sgtz\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971216 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff36f00-70ac-4a9c-96f6-ade70040b187-serving-cert\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971268 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24rcc\" (UniqueName: \"kubernetes.io/projected/053917dd-5476-46d8-b9d4-2a1433d86697-kube-api-access-24rcc\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971293 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfdf6\" (UniqueName: \"kubernetes.io/projected/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-kube-api-access-pfdf6\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971314 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b345a51c-ec48-4066-a49b-713e73429c2d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gqbzs\" (UID: \"b345a51c-ec48-4066-a49b-713e73429c2d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971339 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djz7g\" (UniqueName: \"kubernetes.io/projected/d8feb049-3911-43fa-bd25-6ecee076d1ed-kube-api-access-djz7g\") pod 
\"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971361 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-audit-dir\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971387 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-trusted-ca-bundle\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971410 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e85666ee-5696-465c-9682-802e968660ec-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971431 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tfxs\" (UniqueName: \"kubernetes.io/projected/dedff685-1753-453d-a4ec-4e48b74cfdc4-kube-api-access-8tfxs\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971457 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96211e14-9e17-4511-8523-609ff907f5c5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971496 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96211e14-9e17-4511-8523-609ff907f5c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971520 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-oauth-serving-cert\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971537 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-crsqt\" (UID: \"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971556 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-config\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971578 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-client\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971604 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e85666ee-5696-465c-9682-802e968660ec-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971627 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96211e14-9e17-4511-8523-609ff907f5c5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971643 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-encryption-config\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc 
kubenswrapper[4995]: I0126 23:09:45.971663 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-serving-cert\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971682 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-service-ca\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971702 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krr67\" (UniqueName: \"kubernetes.io/projected/6ff36f00-70ac-4a9c-96f6-ade70040b187-kube-api-access-krr67\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971718 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-stats-auth\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971740 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/321948cb-6f71-4375-b575-ee960cd49bc2-serving-cert\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: 
\"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971760 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971762 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971782 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-etcd-client\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971799 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-etcd-serving-ca\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971823 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8feb049-3911-43fa-bd25-6ecee076d1ed-config\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971844 
4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-etcd-client\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971874 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzfbq\" (UniqueName: \"kubernetes.io/projected/8e46628e-0c8d-4128-b57c-ad324ff9f9bc-kube-api-access-fzfbq\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4cw2\" (UID: \"8e46628e-0c8d-4128-b57c-ad324ff9f9bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971900 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971919 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-serving-cert\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971941 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt5qr\" (UniqueName: \"kubernetes.io/projected/e80b6b9d-3bfd-4315-8643-695c2101bddb-kube-api-access-tt5qr\") pod \"console-f9d7485db-zt9nn\" (UID: 
\"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971964 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-audit-policies\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971994 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f5c78ad-3088-4100-90ac-f863bb21e4a2-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972034 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-metrics-certs\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972062 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-oauth-config\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972119 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972139 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972161 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/492ea284-e9af-45ce-ac55-c5d8168be715-serving-cert\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972184 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dedff685-1753-453d-a4ec-4e48b74cfdc4-proxy-tls\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972206 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blxvp\" (UniqueName: \"kubernetes.io/projected/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-kube-api-access-blxvp\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 
23:09:45.972228 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-276q6\" (UniqueName: \"kubernetes.io/projected/321948cb-6f71-4375-b575-ee960cd49bc2-kube-api-access-276q6\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972248 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d7bb\" (UniqueName: \"kubernetes.io/projected/4b695371-523f-41fd-a8de-6bbc9ce319e0-kube-api-access-4d7bb\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972270 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dedff685-1753-453d-a4ec-4e48b74cfdc4-images\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972290 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dc4d5e-e13d-4d4d-b1f8-390149f24544-service-ca-bundle\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972322 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-etcd-serving-ca\") pod 
\"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972346 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc5qk\" (UniqueName: \"kubernetes.io/projected/ce7a362e-896b-4492-ac2c-08bd19bba7b4-kube-api-access-kc5qk\") pod \"downloads-7954f5f757-pfw4t\" (UID: \"ce7a362e-896b-4492-ac2c-08bd19bba7b4\") " pod="openshift-console/downloads-7954f5f757-pfw4t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972370 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5cvn\" (UniqueName: \"kubernetes.io/projected/24dc4d5e-e13d-4d4d-b1f8-390149f24544-kube-api-access-v5cvn\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972404 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b695371-523f-41fd-a8de-6bbc9ce319e0-trusted-ca\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972425 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-service-ca-bundle\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972448 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-node-pullsecrets\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972739 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-audit\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972763 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8feb049-3911-43fa-bd25-6ecee076d1ed-machine-approver-tls\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972780 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-config\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972805 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-serving-cert\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972828 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wk6w\" (UniqueName: \"kubernetes.io/projected/d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba-kube-api-access-2wk6w\") pod \"multus-admission-controller-857f4d67dd-crsqt\" (UID: \"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972853 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-config\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972894 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e46628e-0c8d-4128-b57c-ad324ff9f9bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4cw2\" (UID: \"8e46628e-0c8d-4128-b57c-ad324ff9f9bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972925 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85666ee-5696-465c-9682-802e968660ec-config\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972959 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2mx2\" (UniqueName: 
\"kubernetes.io/projected/1d547650-1fdd-4334-9376-5f5b165d5069-kube-api-access-h2mx2\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972990 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8feb049-3911-43fa-bd25-6ecee076d1ed-auth-proxy-config\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.973012 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.973034 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-config\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.973057 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-config\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: 
I0126 23:09:45.973078 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b695371-523f-41fd-a8de-6bbc9ce319e0-config\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.973198 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-config\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.973270 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-client-ca\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.973767 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-audit-policies\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.973823 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-audit-dir\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 
23:09:45.973927 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-config\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.974176 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-service-ca-bundle\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.974428 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-client-ca\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.974572 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-config\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972414 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-images\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.974623 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-encryption-config\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972543 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d547650-1fdd-4334-9376-5f5b165d5069-config\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.975415 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8feb049-3911-43fa-bd25-6ecee076d1ed-auth-proxy-config\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.976555 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.977044 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.977632 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-serving-cert\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.977774 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b695371-523f-41fd-a8de-6bbc9ce319e0-serving-cert\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.977869 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kntsd\" (UniqueName: \"kubernetes.io/projected/492ea284-e9af-45ce-ac55-c5d8168be715-kube-api-access-kntsd\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.977897 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/053917dd-5476-46d8-b9d4-2a1433d86697-proxy-tls\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.978312 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d547650-1fdd-4334-9376-5f5b165d5069-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.979217 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8feb049-3911-43fa-bd25-6ecee076d1ed-machine-approver-tls\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.979531 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f5c78ad-3088-4100-90ac-f863bb21e4a2-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.980233 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.981786 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.981993 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-config\") pod 
\"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.982395 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.982869 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.982522 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8feb049-3911-43fa-bd25-6ecee076d1ed-config\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.983475 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.984778 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.985705 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.985799 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d547650-1fdd-4334-9376-5f5b165d5069-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.986817 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.986822 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x9shl"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.987758 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phjts"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.988163 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.988526 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-etcd-client\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.989633 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.991005 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.991504 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.992481 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-serving-cert\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.993183 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-klb9g"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.993816 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b345a51c-ec48-4066-a49b-713e73429c2d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gqbzs\" (UID: 
\"b345a51c-ec48-4066-a49b-713e73429c2d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.997317 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z4xpf"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.001925 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/492ea284-e9af-45ce-ac55-c5d8168be715-serving-cert\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.006609 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.008283 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k4xnx"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009039 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" event={"ID":"d797ab32-8a7c-4f54-be9b-26cdab54574d","Type":"ContainerStarted","Data":"4269eea915fafc6279ea18bdde0ab6bd1012e75f2790e7b038990f724838def5"} Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009079 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" event={"ID":"d797ab32-8a7c-4f54-be9b-26cdab54574d","Type":"ContainerStarted","Data":"fb055c7ffd385417b2cf1e93558d6497aa49c8b59d684d277165e43964cc04a5"} Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009245 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009541 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009667 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-pfw4t"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009704 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kwqrx"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009718 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dh55c"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009730 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009744 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4r5mm"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009758 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-tsdjk"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.010322 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.010351 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.010429 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.010681 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pw55h"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.011793 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.013199 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jr8qp"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.014640 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hjxrn"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.016073 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tzh2d"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.017455 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.018520 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.019483 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.020486 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.021686 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wt84d"] Jan 26 23:09:46 crc 
kubenswrapper[4995]: I0126 23:09:46.022502 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.022600 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8m6w4"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.023233 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8m6w4" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.023855 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.025282 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.026225 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.026562 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-crsqt"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.027687 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wt84d"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.028838 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.030219 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x9shl"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.031406 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-v665q"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.032768 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zt9nn"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.034135 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.036819 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.036852 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.037221 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z4xpf"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.038364 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k4xnx"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.039408 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phjts"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.040381 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8m6w4"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.041519 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.042442 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 
23:09:46.046637 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.066329 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079348 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dedff685-1753-453d-a4ec-4e48b74cfdc4-proxy-tls\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079383 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-276q6\" (UniqueName: \"kubernetes.io/projected/321948cb-6f71-4375-b575-ee960cd49bc2-kube-api-access-276q6\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079401 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d7bb\" (UniqueName: \"kubernetes.io/projected/4b695371-523f-41fd-a8de-6bbc9ce319e0-kube-api-access-4d7bb\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079418 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dedff685-1753-453d-a4ec-4e48b74cfdc4-images\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079433 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dc4d5e-e13d-4d4d-b1f8-390149f24544-service-ca-bundle\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079454 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5cvn\" (UniqueName: \"kubernetes.io/projected/24dc4d5e-e13d-4d4d-b1f8-390149f24544-kube-api-access-v5cvn\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079478 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b695371-523f-41fd-a8de-6bbc9ce319e0-trusted-ca\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079747 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-node-pullsecrets\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079770 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-audit\") pod \"apiserver-76f77b778f-v665q\" (UID: 
\"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079826 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-node-pullsecrets\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080583 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b695371-523f-41fd-a8de-6bbc9ce319e0-trusted-ca\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080688 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-audit\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080743 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-config\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080769 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wk6w\" (UniqueName: \"kubernetes.io/projected/d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba-kube-api-access-2wk6w\") pod \"multus-admission-controller-857f4d67dd-crsqt\" (UID: 
\"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080788 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e46628e-0c8d-4128-b57c-ad324ff9f9bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4cw2\" (UID: \"8e46628e-0c8d-4128-b57c-ad324ff9f9bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080806 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85666ee-5696-465c-9682-802e968660ec-config\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080829 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-config\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080859 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b695371-523f-41fd-a8de-6bbc9ce319e0-config\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080906 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/4b695371-523f-41fd-a8de-6bbc9ce319e0-serving-cert\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080931 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/053917dd-5476-46d8-b9d4-2a1433d86697-proxy-tls\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080968 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-image-import-ca\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080985 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-ca\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.081000 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dedff685-1753-453d-a4ec-4e48b74cfdc4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.081033 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/321948cb-6f71-4375-b575-ee960cd49bc2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.081048 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-service-ca\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.081076 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/053917dd-5476-46d8-b9d4-2a1433d86697-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.081442 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b695371-523f-41fd-a8de-6bbc9ce319e0-config\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.081748 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/321948cb-6f71-4375-b575-ee960cd49bc2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082188 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-service-ca\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082358 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dedff685-1753-453d-a4ec-4e48b74cfdc4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082400 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-serving-cert\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082436 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-default-certificate\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082465 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sgtz\" (UniqueName: \"kubernetes.io/projected/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-kube-api-access-5sgtz\") pod \"apiserver-76f77b778f-v665q\" (UID: 
\"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082480 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff36f00-70ac-4a9c-96f6-ade70040b187-serving-cert\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082495 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24rcc\" (UniqueName: \"kubernetes.io/projected/053917dd-5476-46d8-b9d4-2a1433d86697-kube-api-access-24rcc\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082520 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-audit-dir\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082534 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-trusted-ca-bundle\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082548 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e85666ee-5696-465c-9682-802e968660ec-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082564 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tfxs\" (UniqueName: \"kubernetes.io/projected/dedff685-1753-453d-a4ec-4e48b74cfdc4-kube-api-access-8tfxs\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082580 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96211e14-9e17-4511-8523-609ff907f5c5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082595 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96211e14-9e17-4511-8523-609ff907f5c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082611 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-oauth-serving-cert\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: 
I0126 23:09:46.082625 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-crsqt\" (UID: \"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082639 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-config\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082653 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-client\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082667 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e85666ee-5696-465c-9682-802e968660ec-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082682 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96211e14-9e17-4511-8523-609ff907f5c5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082696 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-encryption-config\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082710 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-serving-cert\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082723 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-service-ca\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082738 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krr67\" (UniqueName: \"kubernetes.io/projected/6ff36f00-70ac-4a9c-96f6-ade70040b187-kube-api-access-krr67\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082752 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-stats-auth\") pod \"router-default-5444994796-tw45t\" 
(UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082767 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/321948cb-6f71-4375-b575-ee960cd49bc2-serving-cert\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082783 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082798 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-etcd-client\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082814 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-etcd-serving-ca\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082829 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/053917dd-5476-46d8-b9d4-2a1433d86697-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082832 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzfbq\" (UniqueName: \"kubernetes.io/projected/8e46628e-0c8d-4128-b57c-ad324ff9f9bc-kube-api-access-fzfbq\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4cw2\" (UID: \"8e46628e-0c8d-4128-b57c-ad324ff9f9bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082866 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt5qr\" (UniqueName: \"kubernetes.io/projected/e80b6b9d-3bfd-4315-8643-695c2101bddb-kube-api-access-tt5qr\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082885 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-metrics-certs\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082901 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-oauth-config\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082368 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-image-import-ca\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.083538 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-config\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.088990 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-audit-dir\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.089817 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.090435 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b695371-523f-41fd-a8de-6bbc9ce319e0-serving-cert\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.090810 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-etcd-serving-ca\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.090915 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-oauth-serving-cert\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.091233 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-encryption-config\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.091476 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff36f00-70ac-4a9c-96f6-ade70040b187-serving-cert\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.091897 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-config\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.092426 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-oauth-config\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.092960 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-client\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.093685 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-trusted-ca-bundle\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.093866 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.095452 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-serving-cert\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.096185 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-serving-cert\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.098684 
4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-etcd-client\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.100647 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/321948cb-6f71-4375-b575-ee960cd49bc2-serving-cert\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.101837 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-config\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.106772 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.113567 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-ca\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.126619 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.136831 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-service-ca\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.154093 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.166967 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.187253 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.206667 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.233208 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.245888 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.266364 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.286891 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.307701 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 
23:09:46.320484 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96211e14-9e17-4511-8523-609ff907f5c5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.327220 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.346623 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.355872 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96211e14-9e17-4511-8523-609ff907f5c5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.367115 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.387371 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.400751 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e85666ee-5696-465c-9682-802e968660ec-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: 
\"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.407185 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.411781 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85666ee-5696-465c-9682-802e968660ec-config\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.427980 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.448486 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.467120 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.487610 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.508809 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.520025 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.520608 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.520636 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.527809 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.535396 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/053917dd-5476-46d8-b9d4-2a1433d86697-proxy-tls\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.547067 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.587233 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.591711 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dedff685-1753-453d-a4ec-4e48b74cfdc4-images\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 
23:09:46.607181 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.627822 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.632690 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dedff685-1753-453d-a4ec-4e48b74cfdc4-proxy-tls\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.646170 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.666782 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.686681 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.708249 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.726711 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.736898 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-default-certificate\") pod 
\"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.747554 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.766994 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.781653 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-stats-auth\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.787591 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.798252 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-metrics-certs\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.806791 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.827714 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.830654 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/24dc4d5e-e13d-4d4d-b1f8-390149f24544-service-ca-bundle\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.847196 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.868448 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.888082 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.900600 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-crsqt\" (UID: \"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.908072 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.927413 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.934725 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e46628e-0c8d-4128-b57c-ad324ff9f9bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4cw2\" (UID: \"8e46628e-0c8d-4128-b57c-ad324ff9f9bc\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.947182 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.965304 4995 request.go:700] Waited for 1.008515668s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0 Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.967313 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.987689 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.006774 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.026777 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.047522 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.067782 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.086534 4995 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.127348 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.147058 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.194485 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cqgv\" (UniqueName: \"kubernetes.io/projected/b345a51c-ec48-4066-a49b-713e73429c2d-kube-api-access-4cqgv\") pod \"cluster-samples-operator-665b6dd947-gqbzs\" (UID: \"b345a51c-ec48-4066-a49b-713e73429c2d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.201372 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj7jv\" (UniqueName: \"kubernetes.io/projected/7f5c78ad-3088-4100-90ac-f863bb21e4a2-kube-api-access-dj7jv\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.225442 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9dc5\" (UniqueName: \"kubernetes.io/projected/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-kube-api-access-s9dc5\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.243505 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfdf6\" (UniqueName: 
\"kubernetes.io/projected/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-kube-api-access-pfdf6\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.260116 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2mx2\" (UniqueName: \"kubernetes.io/projected/1d547650-1fdd-4334-9376-5f5b165d5069-kube-api-access-h2mx2\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.282530 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djz7g\" (UniqueName: \"kubernetes.io/projected/d8feb049-3911-43fa-bd25-6ecee076d1ed-kube-api-access-djz7g\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.301311 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blxvp\" (UniqueName: \"kubernetes.io/projected/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-kube-api-access-blxvp\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.320334 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc5qk\" (UniqueName: \"kubernetes.io/projected/ce7a362e-896b-4492-ac2c-08bd19bba7b4-kube-api-access-kc5qk\") pod \"downloads-7954f5f757-pfw4t\" (UID: \"ce7a362e-896b-4492-ac2c-08bd19bba7b4\") " pod="openshift-console/downloads-7954f5f757-pfw4t" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.340180 
4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kntsd\" (UniqueName: \"kubernetes.io/projected/492ea284-e9af-45ce-ac55-c5d8168be715-kube-api-access-kntsd\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.346297 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.360241 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.366419 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.386688 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.387460 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.407163 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.408487 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.428459 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.438297 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.447393 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.447529 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.458384 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.467935 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.488624 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.507645 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.513811 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-pfw4t" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.516320 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.518783 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.522569 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.527003 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.549375 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.568298 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.591871 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.606667 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.637171 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.646920 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" 
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.667509 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.689433 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.707674 4995 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.727216 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.738121 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc"] Jan 26 23:09:47 crc kubenswrapper[4995]: W0126 23:09:47.744890 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d547650_1fdd_4334_9376_5f5b165d5069.slice/crio-4778f78459737840e9a72718e7db8924346b38a41d3feee27e107579d31c9df1 WatchSource:0}: Error finding container 4778f78459737840e9a72718e7db8924346b38a41d3feee27e107579d31c9df1: Status 404 returned error can't find the container with id 4778f78459737840e9a72718e7db8924346b38a41d3feee27e107579d31c9df1 Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.747530 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.768639 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.768891 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs"] Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.786625 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.806511 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.806894 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr"] Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.813787 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-klb9g"] Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.826989 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 23:09:47 crc kubenswrapper[4995]: W0126 23:09:47.838402 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49ad869c_a391_4d0b_99fa_74e9d7ef4e87.slice/crio-a74040d9ae5bf9f8fce9a8f0603062970123419329645d0dba86c22ccd41a82a WatchSource:0}: Error finding container a74040d9ae5bf9f8fce9a8f0603062970123419329645d0dba86c22ccd41a82a: Status 404 returned error can't find the container with id a74040d9ae5bf9f8fce9a8f0603062970123419329645d0dba86c22ccd41a82a Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.846861 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.868881 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.887903 4995 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.900619 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zp6fr"] Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.902404 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"] Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.906763 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.927227 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 23:09:47 crc kubenswrapper[4995]: W0126 23:09:47.937766 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fb6bf0f_13dc_4a58_853b_98c00142f0bb.slice/crio-8420e19a90b73cb1baaf3ed3fb083fef494d2cf0339203afd00eae69282ad6ad WatchSource:0}: Error finding container 8420e19a90b73cb1baaf3ed3fb083fef494d2cf0339203afd00eae69282ad6ad: Status 404 returned error can't find the container with id 8420e19a90b73cb1baaf3ed3fb083fef494d2cf0339203afd00eae69282ad6ad Jan 26 23:09:47 crc kubenswrapper[4995]: W0126 23:09:47.938369 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f5c78ad_3088_4100_90ac_f863bb21e4a2.slice/crio-d37e0cbeaf79e04860a72c99f4fde9e7eba767757f8c7acc0cfe617f3b06e685 WatchSource:0}: Error finding container d37e0cbeaf79e04860a72c99f4fde9e7eba767757f8c7acc0cfe617f3b06e685: Status 404 returned error can't find the container with id d37e0cbeaf79e04860a72c99f4fde9e7eba767757f8c7acc0cfe617f3b06e685 Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.948886 4995 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.967138 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.986869 4995 request.go:700] Waited for 1.963380855s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.989094 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.006712 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-pfw4t"] Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.006989 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.009198 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kwqrx"] Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.013618 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" event={"ID":"7f5c78ad-3088-4100-90ac-f863bb21e4a2","Type":"ContainerStarted","Data":"d37e0cbeaf79e04860a72c99f4fde9e7eba767757f8c7acc0cfe617f3b06e685"} Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.014656 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" 
event={"ID":"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26","Type":"ContainerStarted","Data":"523a5f1e7efbb572c3d937c5e62a145ddeafb9c6b41005eba4d11a3100ac9f14"} Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.015903 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" event={"ID":"b345a51c-ec48-4066-a49b-713e73429c2d","Type":"ContainerStarted","Data":"45b75743cba30f2c8a78a317ddd77768854d359302b222c39b9b3a2bd78be747"} Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.017113 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" event={"ID":"1d547650-1fdd-4334-9376-5f5b165d5069","Type":"ContainerStarted","Data":"5f3ce2ef41a53c46bf833e6429b57d5cf13622c47fc6a7e5f62273de632158a4"} Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.017141 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" event={"ID":"1d547650-1fdd-4334-9376-5f5b165d5069","Type":"ContainerStarted","Data":"4778f78459737840e9a72718e7db8924346b38a41d3feee27e107579d31c9df1"} Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.019383 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" event={"ID":"49ad869c-a391-4d0b-99fa-74e9d7ef4e87","Type":"ContainerStarted","Data":"a74040d9ae5bf9f8fce9a8f0603062970123419329645d0dba86c22ccd41a82a"} Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.028038 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" event={"ID":"d8feb049-3911-43fa-bd25-6ecee076d1ed","Type":"ContainerStarted","Data":"86eb06a8e536fbfcd920efd785e31b9b8ba36ecad3469b48a509c15e56c657b0"} Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.028075 4995 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" event={"ID":"d8feb049-3911-43fa-bd25-6ecee076d1ed","Type":"ContainerStarted","Data":"2546af6483ff28b3428fc94911d2fddd4c2eeab77073ad65fa5554a13e61af3b"} Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.028336 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.032014 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" event={"ID":"1fb6bf0f-13dc-4a58-853b-98c00142f0bb","Type":"ContainerStarted","Data":"8420e19a90b73cb1baaf3ed3fb083fef494d2cf0339203afd00eae69282ad6ad"} Jan 26 23:09:48 crc kubenswrapper[4995]: W0126 23:09:48.036037 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod492ea284_e9af_45ce_ac55_c5d8168be715.slice/crio-e46b930e42c8ea7f14d84979b545c9877df1aa9b16dfd8c60c63026508d66b8f WatchSource:0}: Error finding container e46b930e42c8ea7f14d84979b545c9877df1aa9b16dfd8c60c63026508d66b8f: Status 404 returned error can't find the container with id e46b930e42c8ea7f14d84979b545c9877df1aa9b16dfd8c60c63026508d66b8f Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.046585 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.084137 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d7bb\" (UniqueName: \"kubernetes.io/projected/4b695371-523f-41fd-a8de-6bbc9ce319e0-kube-api-access-4d7bb\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.100474 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-276q6\" (UniqueName: \"kubernetes.io/projected/321948cb-6f71-4375-b575-ee960cd49bc2-kube-api-access-276q6\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.129815 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5cvn\" (UniqueName: \"kubernetes.io/projected/24dc4d5e-e13d-4d4d-b1f8-390149f24544-kube-api-access-v5cvn\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.144213 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.145692 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wk6w\" (UniqueName: \"kubernetes.io/projected/d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba-kube-api-access-2wk6w\") pod \"multus-admission-controller-857f4d67dd-crsqt\" (UID: \"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.160293 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzfbq\" (UniqueName: \"kubernetes.io/projected/8e46628e-0c8d-4128-b57c-ad324ff9f9bc-kube-api-access-fzfbq\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4cw2\" (UID: \"8e46628e-0c8d-4128-b57c-ad324ff9f9bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.166771 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.190737 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sgtz\" (UniqueName: \"kubernetes.io/projected/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-kube-api-access-5sgtz\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.202553 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt5qr\" (UniqueName: \"kubernetes.io/projected/e80b6b9d-3bfd-4315-8643-695c2101bddb-kube-api-access-tt5qr\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.223903 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krr67\" (UniqueName: \"kubernetes.io/projected/6ff36f00-70ac-4a9c-96f6-ade70040b187-kube-api-access-krr67\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.249733 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96211e14-9e17-4511-8523-609ff907f5c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.252330 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.264497 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.265448 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24rcc\" (UniqueName: \"kubernetes.io/projected/053917dd-5476-46d8-b9d4-2a1433d86697-kube-api-access-24rcc\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.275666 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.285907 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tfxs\" (UniqueName: \"kubernetes.io/projected/dedff685-1753-453d-a4ec-4e48b74cfdc4-kube-api-access-8tfxs\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.302093 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e85666ee-5696-465c-9682-802e968660ec-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.313493 4995 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.326823 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.345641 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4r5mm"] Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.347268 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.367619 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.409311 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dh55c"] Jan 26 23:09:48 crc kubenswrapper[4995]: W0126 23:09:48.420987 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod321948cb_6f71_4375_b575_ee960cd49bc2.slice/crio-2eee7a2baa2b1a12547997b7a04cb2211ab293cccbc232a067832d6c82b6f518 WatchSource:0}: Error finding container 2eee7a2baa2b1a12547997b7a04cb2211ab293cccbc232a067832d6c82b6f518: Status 404 returned error can't find the container with id 2eee7a2baa2b1a12547997b7a04cb2211ab293cccbc232a067832d6c82b6f518 Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.427499 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430593 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-certificates\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430639 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec91f390-afe7-440e-b452-3f0bd7e65862-metrics-tls\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430661 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/09fe04fa-126d-4c84-948f-55b13dad9e24-srv-cert\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430710 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec91f390-afe7-440e-b452-3f0bd7e65862-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430741 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-bound-sa-token\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 
23:09:48.430758 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/841a4225-c083-4025-bd1e-c6cd2ebf2b85-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430773 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-tls\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430841 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430909 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-trusted-ca\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430929 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqvmw\" (UniqueName: \"kubernetes.io/projected/4d4d9e36-8d49-41a8-a04b-194a5f652f94-kube-api-access-pqvmw\") pod 
\"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430964 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430987 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj7rn\" (UniqueName: \"kubernetes.io/projected/41fedfb8-9381-43a2-8f78-2dea53ad7882-kube-api-access-bj7rn\") pod \"dns-operator-744455d44c-pw55h\" (UID: \"41fedfb8-9381-43a2-8f78-2dea53ad7882\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431011 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-config\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431044 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 
23:09:48.431067 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431085 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhlfg\" (UniqueName: \"kubernetes.io/projected/841a4225-c083-4025-bd1e-c6cd2ebf2b85-kube-api-access-xhlfg\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431245 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431348 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz"
Jan 26 23:09:48 crc kubenswrapper[4995]: E0126 23:09:48.431478 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:48.931467355 +0000 UTC m=+93.096174820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431685 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431719 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-dir\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431739 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431769 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/841a4225-c083-4025-bd1e-c6cd2ebf2b85-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432040 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432064 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5507dd1-0894-4d9b-982d-817ebbb0092d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432128 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/841a4225-c083-4025-bd1e-c6cd2ebf2b85-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432160 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7f2l\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-kube-api-access-n7f2l\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432180 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2nqc\" (UniqueName: \"kubernetes.io/projected/466a813e-97dd-4113-b15c-1e0216edca40-kube-api-access-s2nqc\") pod \"migrator-59844c95c7-fk27l\" (UID: \"466a813e-97dd-4113-b15c-1e0216edca40\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432197 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwn5n\" (UniqueName: \"kubernetes.io/projected/ec91f390-afe7-440e-b452-3f0bd7e65862-kube-api-access-qwn5n\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432215 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432231 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbdvk\" (UniqueName: \"kubernetes.io/projected/09fe04fa-126d-4c84-948f-55b13dad9e24-kube-api-access-lbdvk\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432267 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-policies\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432283 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432304 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432326 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41fedfb8-9381-43a2-8f78-2dea53ad7882-metrics-tls\") pod \"dns-operator-744455d44c-pw55h\" (UID: \"41fedfb8-9381-43a2-8f78-2dea53ad7882\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw55h"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432348 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkq2\" (UniqueName: \"kubernetes.io/projected/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-kube-api-access-5gkq2\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432431 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/09fe04fa-126d-4c84-948f-55b13dad9e24-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432506 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432539 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5507dd1-0894-4d9b-982d-817ebbb0092d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432567 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432599 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec91f390-afe7-440e-b452-3f0bd7e65862-trusted-ca\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432644 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432725 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.447305 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.456295 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zt9nn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.459912 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-crsqt"]
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.477219 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v665q"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.494967 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.497963 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2"]
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.514209 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh"
Jan 26 23:09:48 crc kubenswrapper[4995]: W0126 23:09:48.517051 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e46628e_0c8d_4128_b57c_ad324ff9f9bc.slice/crio-a1403b3710ae765a0f83c022ac2575e8c1b6f7d087305557a48e9d57ad87994d WatchSource:0}: Error finding container a1403b3710ae765a0f83c022ac2575e8c1b6f7d087305557a48e9d57ad87994d: Status 404 returned error can't find the container with id a1403b3710ae765a0f83c022ac2575e8c1b6f7d087305557a48e9d57ad87994d
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.522361 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.532285 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.533527 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 23:09:48 crc kubenswrapper[4995]: E0126 23:09:48.533760 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.033736327 +0000 UTC m=+93.198443792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534081 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-plugins-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534160 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfgj8\" (UniqueName: \"kubernetes.io/projected/8d0941b6-29be-464b-91b9-ecd2e8545dc0-kube-api-access-zfgj8\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534240 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3272988d-332d-4fe7-a794-c262bb6d8e11-signing-cabundle\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534281 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534304 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5507dd1-0894-4d9b-982d-817ebbb0092d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534352 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/841a4225-c083-4025-bd1e-c6cd2ebf2b85-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534373 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d0941b6-29be-464b-91b9-ecd2e8545dc0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534397 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x8xg\" (UniqueName: \"kubernetes.io/projected/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-kube-api-access-9x8xg\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534446 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7de4fe23-2da4-47df-a68b-d6d5148ab964-config-volume\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534475 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7f2l\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-kube-api-access-n7f2l\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534554 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2nqc\" (UniqueName: \"kubernetes.io/projected/466a813e-97dd-4113-b15c-1e0216edca40-kube-api-access-s2nqc\") pod \"migrator-59844c95c7-fk27l\" (UID: \"466a813e-97dd-4113-b15c-1e0216edca40\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534605 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwn5n\" (UniqueName: \"kubernetes.io/projected/ec91f390-afe7-440e-b452-3f0bd7e65862-kube-api-access-qwn5n\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534642 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-policies\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534673 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534691 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534714 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbdvk\" (UniqueName: \"kubernetes.io/projected/09fe04fa-126d-4c84-948f-55b13dad9e24-kube-api-access-lbdvk\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534737 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-registration-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534756 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534774 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41fedfb8-9381-43a2-8f78-2dea53ad7882-metrics-tls\") pod \"dns-operator-744455d44c-pw55h\" (UID: \"41fedfb8-9381-43a2-8f78-2dea53ad7882\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw55h"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534822 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxdsj\" (UniqueName: \"kubernetes.io/projected/3272988d-332d-4fe7-a794-c262bb6d8e11-kube-api-access-jxdsj\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534857 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gkq2\" (UniqueName: \"kubernetes.io/projected/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-kube-api-access-5gkq2\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534971 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-tmpfs\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535026 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/09fe04fa-126d-4c84-948f-55b13dad9e24-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535048 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thpnm\" (UniqueName: \"kubernetes.io/projected/475d4d77-5500-4d9d-8d5f-c9fe0f47364b-kube-api-access-thpnm\") pod \"ingress-canary-8m6w4\" (UID: \"475d4d77-5500-4d9d-8d5f-c9fe0f47364b\") " pod="openshift-ingress-canary/ingress-canary-8m6w4"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535069 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77hg9\" (UniqueName: \"kubernetes.io/projected/7de4fe23-2da4-47df-a68b-d6d5148ab964-kube-api-access-77hg9\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535119 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4cr4\" (UniqueName: \"kubernetes.io/projected/119edb68-a6b6-4bdf-9f74-c14211a24ecd-kube-api-access-g4cr4\") pod \"package-server-manager-789f6589d5-zbzdl\" (UID: \"119edb68-a6b6-4bdf-9f74-c14211a24ecd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535140 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535160 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535202 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5hsj\" (UniqueName: \"kubernetes.io/projected/c9544187-4d8b-4764-bfdb-067d6d6d06b4-kube-api-access-z5hsj\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535236 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5507dd1-0894-4d9b-982d-817ebbb0092d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535281 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535312 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec91f390-afe7-440e-b452-3f0bd7e65862-trusted-ca\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535422 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3272988d-332d-4fe7-a794-c262bb6d8e11-signing-key\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535469 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/475d4d77-5500-4d9d-8d5f-c9fe0f47364b-cert\") pod \"ingress-canary-8m6w4\" (UID: \"475d4d77-5500-4d9d-8d5f-c9fe0f47364b\") " pod="openshift-ingress-canary/ingress-canary-8m6w4"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535516 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535566 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535606 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-certificates\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535641 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec91f390-afe7-440e-b452-3f0bd7e65862-metrics-tls\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535658 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-apiservice-cert\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535686 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/09fe04fa-126d-4c84-948f-55b13dad9e24-srv-cert\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535705 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skv4h\" (UniqueName: \"kubernetes.io/projected/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-kube-api-access-skv4h\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535734 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec91f390-afe7-440e-b452-3f0bd7e65862-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535751 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535797 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d0941b6-29be-464b-91b9-ecd2e8545dc0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535827 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-bound-sa-token\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535844 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-webhook-cert\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535862 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-csi-data-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535907 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/841a4225-c083-4025-bd1e-c6cd2ebf2b85-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535930 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-tls\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535963 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536031 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-mountpoint-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536083 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-trusted-ca\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536116 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nfnd\" (UniqueName: \"kubernetes.io/projected/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-kube-api-access-8nfnd\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536135 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab1b8e08-3212-4197-a8e7-db12babb6414-config-volume\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536155 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqvmw\" (UniqueName: \"kubernetes.io/projected/4d4d9e36-8d49-41a8-a04b-194a5f652f94-kube-api-access-pqvmw\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536170 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-socket-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536211 4995 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxrcn\" (UniqueName: \"kubernetes.io/projected/ab1b8e08-3212-4197-a8e7-db12babb6414-kube-api-access-dxrcn\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536236 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536252 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab1b8e08-3212-4197-a8e7-db12babb6414-metrics-tls\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536294 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj7rn\" (UniqueName: \"kubernetes.io/projected/41fedfb8-9381-43a2-8f78-2dea53ad7882-kube-api-access-bj7rn\") pod \"dns-operator-744455d44c-pw55h\" (UID: \"41fedfb8-9381-43a2-8f78-2dea53ad7882\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536312 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgg7n\" (UniqueName: \"kubernetes.io/projected/480d13a8-eecc-4614-9b43-fd3fb5f28695-kube-api-access-rgg7n\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536333 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-config\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536351 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/480d13a8-eecc-4614-9b43-fd3fb5f28695-profile-collector-cert\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536370 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536559 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536582 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xhlfg\" (UniqueName: \"kubernetes.io/projected/841a4225-c083-4025-bd1e-c6cd2ebf2b85-kube-api-access-xhlfg\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536605 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4fnf\" (UniqueName: \"kubernetes.io/projected/3f9a7b30-dccb-4753-81a1-622853d6ba3c-kube-api-access-x4fnf\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536636 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536653 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9544187-4d8b-4764-bfdb-067d6d6d06b4-config\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536668 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/480d13a8-eecc-4614-9b43-fd3fb5f28695-srv-cert\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536715 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: E0126 23:09:48.537148 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.03713552 +0000 UTC m=+93.201842985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.537188 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-policies\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.537942 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538452 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-certs\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538496 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538518 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/841a4225-c083-4025-bd1e-c6cd2ebf2b85-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538541 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-dir\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 
23:09:48.538544 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538561 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538584 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-node-bootstrap-token\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538605 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7de4fe23-2da4-47df-a68b-d6d5148ab964-secret-volume\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538630 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9544187-4d8b-4764-bfdb-067d6d6d06b4-serving-cert\") pod 
\"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538718 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/119edb68-a6b6-4bdf-9f74-c14211a24ecd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zbzdl\" (UID: \"119edb68-a6b6-4bdf-9f74-c14211a24ecd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.539901 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.540342 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec91f390-afe7-440e-b452-3f0bd7e65862-trusted-ca\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.546616 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.548156 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-trusted-ca\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.548192 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-dir\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.548211 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5507dd1-0894-4d9b-982d-817ebbb0092d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.548459 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.548977 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/841a4225-c083-4025-bd1e-c6cd2ebf2b85-image-registry-operator-tls\") 
pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.549005 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.549332 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.549411 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5507dd1-0894-4d9b-982d-817ebbb0092d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.549756 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.551185 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.553022 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/841a4225-c083-4025-bd1e-c6cd2ebf2b85-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.556017 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-certificates\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.556212 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.556622 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec91f390-afe7-440e-b452-3f0bd7e65862-metrics-tls\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.557345 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-config\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.558172 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.558399 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.559762 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-tls\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.561506 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/09fe04fa-126d-4c84-948f-55b13dad9e24-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.569930 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.574355 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41fedfb8-9381-43a2-8f78-2dea53ad7882-metrics-tls\") pod \"dns-operator-744455d44c-pw55h\" (UID: \"41fedfb8-9381-43a2-8f78-2dea53ad7882\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.579540 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.587468 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.587593 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-qwn5n\" (UniqueName: \"kubernetes.io/projected/ec91f390-afe7-440e-b452-3f0bd7e65862-kube-api-access-qwn5n\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.587634 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/09fe04fa-126d-4c84-948f-55b13dad9e24-srv-cert\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.607715 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec91f390-afe7-440e-b452-3f0bd7e65862-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.633926 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhlfg\" (UniqueName: \"kubernetes.io/projected/841a4225-c083-4025-bd1e-c6cd2ebf2b85-kube-api-access-xhlfg\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641413 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 
23:09:48.641554 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-apiservice-cert\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641583 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skv4h\" (UniqueName: \"kubernetes.io/projected/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-kube-api-access-skv4h\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641600 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641621 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d0941b6-29be-464b-91b9-ecd2e8545dc0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641643 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-webhook-cert\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: 
\"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641658 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-csi-data-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641685 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-mountpoint-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641701 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nfnd\" (UniqueName: \"kubernetes.io/projected/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-kube-api-access-8nfnd\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641714 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab1b8e08-3212-4197-a8e7-db12babb6414-config-volume\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641734 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-socket-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: 
\"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641749 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxrcn\" (UniqueName: \"kubernetes.io/projected/ab1b8e08-3212-4197-a8e7-db12babb6414-kube-api-access-dxrcn\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641767 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab1b8e08-3212-4197-a8e7-db12babb6414-metrics-tls\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641789 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgg7n\" (UniqueName: \"kubernetes.io/projected/480d13a8-eecc-4614-9b43-fd3fb5f28695-kube-api-access-rgg7n\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641809 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/480d13a8-eecc-4614-9b43-fd3fb5f28695-profile-collector-cert\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641833 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4fnf\" (UniqueName: \"kubernetes.io/projected/3f9a7b30-dccb-4753-81a1-622853d6ba3c-kube-api-access-x4fnf\") pod 
\"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641849 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9544187-4d8b-4764-bfdb-067d6d6d06b4-config\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641864 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/480d13a8-eecc-4614-9b43-fd3fb5f28695-srv-cert\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642574 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-certs\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642604 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-node-bootstrap-token\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642619 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7de4fe23-2da4-47df-a68b-d6d5148ab964-secret-volume\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642643 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/119edb68-a6b6-4bdf-9f74-c14211a24ecd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zbzdl\" (UID: \"119edb68-a6b6-4bdf-9f74-c14211a24ecd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642657 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9544187-4d8b-4764-bfdb-067d6d6d06b4-serving-cert\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642674 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-plugins-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642697 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfgj8\" (UniqueName: \"kubernetes.io/projected/8d0941b6-29be-464b-91b9-ecd2e8545dc0-kube-api-access-zfgj8\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642712 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3272988d-332d-4fe7-a794-c262bb6d8e11-signing-cabundle\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642737 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d0941b6-29be-464b-91b9-ecd2e8545dc0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642753 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x8xg\" (UniqueName: \"kubernetes.io/projected/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-kube-api-access-9x8xg\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642770 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7de4fe23-2da4-47df-a68b-d6d5148ab964-config-volume\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642809 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" 
(UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-registration-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642828 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxdsj\" (UniqueName: \"kubernetes.io/projected/3272988d-332d-4fe7-a794-c262bb6d8e11-kube-api-access-jxdsj\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642855 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-tmpfs\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642870 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thpnm\" (UniqueName: \"kubernetes.io/projected/475d4d77-5500-4d9d-8d5f-c9fe0f47364b-kube-api-access-thpnm\") pod \"ingress-canary-8m6w4\" (UID: \"475d4d77-5500-4d9d-8d5f-c9fe0f47364b\") " pod="openshift-ingress-canary/ingress-canary-8m6w4" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642885 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77hg9\" (UniqueName: \"kubernetes.io/projected/7de4fe23-2da4-47df-a68b-d6d5148ab964-kube-api-access-77hg9\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642910 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g4cr4\" (UniqueName: \"kubernetes.io/projected/119edb68-a6b6-4bdf-9f74-c14211a24ecd-kube-api-access-g4cr4\") pod \"package-server-manager-789f6589d5-zbzdl\" (UID: \"119edb68-a6b6-4bdf-9f74-c14211a24ecd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642925 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642942 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5hsj\" (UniqueName: \"kubernetes.io/projected/c9544187-4d8b-4764-bfdb-067d6d6d06b4-kube-api-access-z5hsj\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642968 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3272988d-332d-4fe7-a794-c262bb6d8e11-signing-key\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642982 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/475d4d77-5500-4d9d-8d5f-c9fe0f47364b-cert\") pod \"ingress-canary-8m6w4\" (UID: \"475d4d77-5500-4d9d-8d5f-c9fe0f47364b\") " pod="openshift-ingress-canary/ingress-canary-8m6w4" Jan 26 
23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.645769 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/475d4d77-5500-4d9d-8d5f-c9fe0f47364b-cert\") pod \"ingress-canary-8m6w4\" (UID: \"475d4d77-5500-4d9d-8d5f-c9fe0f47364b\") " pod="openshift-ingress-canary/ingress-canary-8m6w4" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.646907 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7f2l\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-kube-api-access-n7f2l\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.647693 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7de4fe23-2da4-47df-a68b-d6d5148ab964-config-volume\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.650873 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-node-bootstrap-token\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.651092 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-registration-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc 
kubenswrapper[4995]: I0126 23:09:48.651570 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-tmpfs\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.651671 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-certs\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: E0126 23:09:48.651761 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.151743011 +0000 UTC m=+93.316450476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.654431 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-apiservice-cert\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.655340 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.655502 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.656088 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d0941b6-29be-464b-91b9-ecd2e8545dc0-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.658291 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-webhook-cert\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.658341 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-csi-data-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.658403 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-mountpoint-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.658811 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-plugins-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.659798 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/3272988d-332d-4fe7-a794-c262bb6d8e11-signing-cabundle\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.660050 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-socket-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.662777 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/480d13a8-eecc-4614-9b43-fd3fb5f28695-profile-collector-cert\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.663653 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7de4fe23-2da4-47df-a68b-d6d5148ab964-secret-volume\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.664317 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/119edb68-a6b6-4bdf-9f74-c14211a24ecd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zbzdl\" (UID: \"119edb68-a6b6-4bdf-9f74-c14211a24ecd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.667225 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/480d13a8-eecc-4614-9b43-fd3fb5f28695-srv-cert\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.677857 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d0941b6-29be-464b-91b9-ecd2e8545dc0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.677863 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab1b8e08-3212-4197-a8e7-db12babb6414-metrics-tls\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.678291 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9544187-4d8b-4764-bfdb-067d6d6d06b4-config\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.679282 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9544187-4d8b-4764-bfdb-067d6d6d06b4-serving-cert\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc 
kubenswrapper[4995]: I0126 23:09:48.680220 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3272988d-332d-4fe7-a794-c262bb6d8e11-signing-key\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.681155 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab1b8e08-3212-4197-a8e7-db12babb6414-config-volume\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.682940 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2nqc\" (UniqueName: \"kubernetes.io/projected/466a813e-97dd-4113-b15c-1e0216edca40-kube-api-access-s2nqc\") pod \"migrator-59844c95c7-fk27l\" (UID: \"466a813e-97dd-4113-b15c-1e0216edca40\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.686936 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqvmw\" (UniqueName: \"kubernetes.io/projected/4d4d9e36-8d49-41a8-a04b-194a5f652f94-kube-api-access-pqvmw\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.706759 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbdvk\" (UniqueName: \"kubernetes.io/projected/09fe04fa-126d-4c84-948f-55b13dad9e24-kube-api-access-lbdvk\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:48 crc 
kubenswrapper[4995]: I0126 23:09:48.725696 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gkq2\" (UniqueName: \"kubernetes.io/projected/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-kube-api-access-5gkq2\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.728488 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.734159 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zt9nn"] Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.749332 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: E0126 23:09:48.750299 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.250281863 +0000 UTC m=+93.414989328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.751331 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.765093 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-bound-sa-token\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.765920 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj7rn\" (UniqueName: \"kubernetes.io/projected/41fedfb8-9381-43a2-8f78-2dea53ad7882-kube-api-access-bj7rn\") pod \"dns-operator-744455d44c-pw55h\" (UID: \"41fedfb8-9381-43a2-8f78-2dea53ad7882\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.770139 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v665q"] Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.784376 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/841a4225-c083-4025-bd1e-c6cd2ebf2b85-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: 
\"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.798037 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.805596 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.809958 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.846334 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.850272 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:48 crc kubenswrapper[4995]: E0126 23:09:48.850854 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.350835514 +0000 UTC m=+93.515542979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.868090 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxdsj\" (UniqueName: \"kubernetes.io/projected/3272988d-332d-4fe7-a794-c262bb6d8e11-kube-api-access-jxdsj\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.868370 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.880839 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.884135 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thpnm\" (UniqueName: \"kubernetes.io/projected/475d4d77-5500-4d9d-8d5f-c9fe0f47364b-kube-api-access-thpnm\") pod \"ingress-canary-8m6w4\" (UID: \"475d4d77-5500-4d9d-8d5f-c9fe0f47364b\") " pod="openshift-ingress-canary/ingress-canary-8m6w4" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.904290 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77hg9\" (UniqueName: \"kubernetes.io/projected/7de4fe23-2da4-47df-a68b-d6d5148ab964-kube-api-access-77hg9\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.924443 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4cr4\" (UniqueName: \"kubernetes.io/projected/119edb68-a6b6-4bdf-9f74-c14211a24ecd-kube-api-access-g4cr4\") pod \"package-server-manager-789f6589d5-zbzdl\" (UID: \"119edb68-a6b6-4bdf-9f74-c14211a24ecd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.925183 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skv4h\" (UniqueName: \"kubernetes.io/projected/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-kube-api-access-skv4h\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.925996 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.941963 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5hsj\" (UniqueName: \"kubernetes.io/projected/c9544187-4d8b-4764-bfdb-067d6d6d06b4-kube-api-access-z5hsj\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.952340 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: E0126 23:09:48.953037 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.452694716 +0000 UTC m=+93.617402171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.958809 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.966340 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxrcn\" (UniqueName: \"kubernetes.io/projected/ab1b8e08-3212-4197-a8e7-db12babb6414-kube-api-access-dxrcn\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.969391 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.973523 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8m6w4" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.989742 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nfnd\" (UniqueName: \"kubernetes.io/projected/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-kube-api-access-8nfnd\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.010492 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfgj8\" (UniqueName: \"kubernetes.io/projected/8d0941b6-29be-464b-91b9-ecd2e8545dc0-kube-api-access-zfgj8\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.026231 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgg7n\" (UniqueName: \"kubernetes.io/projected/480d13a8-eecc-4614-9b43-fd3fb5f28695-kube-api-access-rgg7n\") pod 
\"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.036658 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.054069 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.054463 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.554448116 +0000 UTC m=+93.719155581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.055019 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4fnf\" (UniqueName: \"kubernetes.io/projected/3f9a7b30-dccb-4753-81a1-622853d6ba3c-kube-api-access-x4fnf\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.068287 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x8xg\" (UniqueName: \"kubernetes.io/projected/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-kube-api-access-9x8xg\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.070676 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" event={"ID":"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba","Type":"ContainerStarted","Data":"fdea8b0c418edfb48588d14ff27888d2ae3c0eb483299fd04faef569333e6eda"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.093219 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tw45t" event={"ID":"24dc4d5e-e13d-4d4d-b1f8-390149f24544","Type":"ContainerStarted","Data":"c9b20b52a0f18ec9712faa056f61b19c7cdb8212487a56b1c5f4717c2628f871"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 
23:09:49.093265 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tw45t" event={"ID":"24dc4d5e-e13d-4d4d-b1f8-390149f24544","Type":"ContainerStarted","Data":"6244d662c35438f9f0b7fb0195f92df949f74ffed8d054990d997743c5a7aed9"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.094983 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zt9nn" event={"ID":"e80b6b9d-3bfd-4315-8643-695c2101bddb","Type":"ContainerStarted","Data":"f8da331ad5479ba2deada0b967ed7ea0fd7ef2bec4a402a501182d5512dc16e8"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.098848 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" event={"ID":"7f5c78ad-3088-4100-90ac-f863bb21e4a2","Type":"ContainerStarted","Data":"6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.099149 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.116564 4995 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-qgp7d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.116610 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" podUID="7f5c78ad-3088-4100-90ac-f863bb21e4a2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 26 23:09:49 crc kubenswrapper[4995]: 
I0126 23:09:49.118084 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" event={"ID":"b345a51c-ec48-4066-a49b-713e73429c2d","Type":"ContainerStarted","Data":"6fd6e36ee51a843b166cff419e72e3a4c2b8aa612494781178c175d203e9e522"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.118146 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" event={"ID":"b345a51c-ec48-4066-a49b-713e73429c2d","Type":"ContainerStarted","Data":"37f9c83ea30ad860cd20b48ac40f89a5da1b31a207ad12326342bbd5724e8f42"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.120087 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" event={"ID":"49ad869c-a391-4d0b-99fa-74e9d7ef4e87","Type":"ContainerStarted","Data":"aeb5e8675e5432ec2f975c8753f3114b7245f1a9f137f445c5910713e45ab72f"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.120134 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" event={"ID":"49ad869c-a391-4d0b-99fa-74e9d7ef4e87","Type":"ContainerStarted","Data":"8cac44f772bd2c32925480f955b085667662b01fbe75994873ebd78a8f7af5ca"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.124016 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.137573 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" event={"ID":"4b695371-523f-41fd-a8de-6bbc9ce319e0","Type":"ContainerStarted","Data":"959992f9a8bbbf6fc66596650fcd767418dbf672b1e8093ffe33becc678071ca"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.137606 4995 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" event={"ID":"4b695371-523f-41fd-a8de-6bbc9ce319e0","Type":"ContainerStarted","Data":"44f8c92f494763f6b6e1265078a24a2a0549eaaf74bbfcf09562e0b8234afe66"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.138231 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.142366 4995 patch_prober.go:28] interesting pod/console-operator-58897d9998-4r5mm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.142405 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" podUID="4b695371-523f-41fd-a8de-6bbc9ce319e0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.158534 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-pfw4t" event={"ID":"ce7a362e-896b-4492-ac2c-08bd19bba7b4","Type":"ContainerStarted","Data":"b7cb9a79d82b0aa5048b3d1e45243664ced238be2f1ae2225e2202f12d4aaf1b"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.158569 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-pfw4t" event={"ID":"ce7a362e-896b-4492-ac2c-08bd19bba7b4","Type":"ContainerStarted","Data":"4cd69f16d57d53d32131d187ff3a24fd15cad9aa0917d7fa171ecd9a9da1b143"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.159299 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.159344 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-pfw4t" Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.160688 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.660667674 +0000 UTC m=+93.825375179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.173058 4995 patch_prober.go:28] interesting pod/downloads-7954f5f757-pfw4t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.173126 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pfw4t" podUID="ce7a362e-896b-4492-ac2c-08bd19bba7b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection 
refused" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.177786 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" event={"ID":"492ea284-e9af-45ce-ac55-c5d8168be715","Type":"ContainerStarted","Data":"99114bb9953be1339bd024eddcc2898314389027efa02c8a1e1729d06736c331"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.177834 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" event={"ID":"492ea284-e9af-45ce-ac55-c5d8168be715","Type":"ContainerStarted","Data":"e46b930e42c8ea7f14d84979b545c9877df1aa9b16dfd8c60c63026508d66b8f"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.182380 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" event={"ID":"8e46628e-0c8d-4128-b57c-ad324ff9f9bc","Type":"ContainerStarted","Data":"942c0ce88f527e4fce712a6de5fab759daf8ffa24382477ded4546a39e4e7c88"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.182414 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" event={"ID":"8e46628e-0c8d-4128-b57c-ad324ff9f9bc","Type":"ContainerStarted","Data":"a1403b3710ae765a0f83c022ac2575e8c1b6f7d087305557a48e9d57ad87994d"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.189617 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.191397 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" event={"ID":"1fb6bf0f-13dc-4a58-853b-98c00142f0bb","Type":"ContainerStarted","Data":"f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.192023 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.192969 4995 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zp6fr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.192998 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" podUID="1fb6bf0f-13dc-4a58-853b-98c00142f0bb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.193431 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.200779 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.204267 4995 generic.go:334] "Generic (PLEG): container finished" podID="cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26" containerID="ca28e7af992f7de335914ea87f9bbb5022d986d9dc7cdd971265c095169898fe" exitCode=0 Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.204364 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" event={"ID":"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26","Type":"ContainerDied","Data":"ca28e7af992f7de335914ea87f9bbb5022d986d9dc7cdd971265c095169898fe"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.207359 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.214353 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.215334 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" event={"ID":"321948cb-6f71-4375-b575-ee960cd49bc2","Type":"ContainerStarted","Data":"d0c5022bc8c220348c16ec918943e215fdb768799fd02e688e4e67a379a01657"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.215363 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" event={"ID":"321948cb-6f71-4375-b575-ee960cd49bc2","Type":"ContainerStarted","Data":"2eee7a2baa2b1a12547997b7a04cb2211ab293cccbc232a067832d6c82b6f518"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.216926 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v665q" event={"ID":"ee963cde-b7bc-4699-9b45-aaa3b7df0e38","Type":"ContainerStarted","Data":"923e85ff5e2386df613ffe2279edd58b871c7d79968ab0c23e29f57d3983fbd9"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.219010 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" event={"ID":"d8feb049-3911-43fa-bd25-6ecee076d1ed","Type":"ContainerStarted","Data":"ff06d977cce16804ac970033cc5547543b42e97a49c256279e04328924fe630e"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.219402 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.250134 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.254013 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.261091 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.261231 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.761202814 +0000 UTC m=+93.925910289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.261319 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.261923 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.262221 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.762206028 +0000 UTC m=+93.926913563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.276764 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.286977 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.304767 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jr8qp"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 
23:09:49.371380 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.372371 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.372609 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.378133 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.878083111 +0000 UTC m=+94.042790586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.378373 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.378760 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.878748557 +0000 UTC m=+94.043456022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: W0126 23:09:49.432558 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ff36f00_70ac_4a9c_96f6_ade70040b187.slice/crio-9468e4366ef5f6d33a8eefb14db467bf11b044e6f88cf7ec9ac39d6a01a76fe4 WatchSource:0}: Error finding container 9468e4366ef5f6d33a8eefb14db467bf11b044e6f88cf7ec9ac39d6a01a76fe4: Status 404 returned error can't find the container with id 9468e4366ef5f6d33a8eefb14db467bf11b044e6f88cf7ec9ac39d6a01a76fe4 Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.480999 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.484256 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.984226087 +0000 UTC m=+94.148933552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.520330 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" podStartSLOduration=71.520311143 podStartE2EDuration="1m11.520311143s" podCreationTimestamp="2026-01-26 23:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:49.519152265 +0000 UTC m=+93.683859730" watchObservedRunningTime="2026-01-26 23:09:49.520311143 +0000 UTC m=+93.685018608" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.583348 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.584561 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.084544962 +0000 UTC m=+94.249252427 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.599383 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-pfw4t" podStartSLOduration=72.599361272 podStartE2EDuration="1m12.599361272s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:49.595910268 +0000 UTC m=+93.760617743" watchObservedRunningTime="2026-01-26 23:09:49.599361272 +0000 UTC m=+93.764068737" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.685746 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.686158 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.186130018 +0000 UTC m=+94.350837493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.690357 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.694579 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.775586 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.787805 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.788080 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.288068932 +0000 UTC m=+94.452776397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.823124 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" podStartSLOduration=72.823095322 podStartE2EDuration="1m12.823095322s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:49.778354916 +0000 UTC m=+93.943062381" watchObservedRunningTime="2026-01-26 23:09:49.823095322 +0000 UTC m=+93.987802787" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.836843 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" podStartSLOduration=72.836825065 podStartE2EDuration="1m12.836825065s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:49.823778959 +0000 UTC m=+93.988486424" watchObservedRunningTime="2026-01-26 23:09:49.836825065 +0000 UTC m=+94.001532530" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.837803 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tzh2d"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.878248 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.884611 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" podStartSLOduration=72.884592235 podStartE2EDuration="1m12.884592235s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:49.884312858 +0000 UTC m=+94.049020323" watchObservedRunningTime="2026-01-26 23:09:49.884592235 +0000 UTC m=+94.049299720" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.888438 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.888917 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.388901289 +0000 UTC m=+94.553608754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.896511 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.925928 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" podStartSLOduration=72.925904257 podStartE2EDuration="1m12.925904257s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:49.91943886 +0000 UTC m=+94.084146315" watchObservedRunningTime="2026-01-26 23:09:49.925904257 +0000 UTC m=+94.090611722" Jan 26 23:09:49 crc kubenswrapper[4995]: W0126 23:09:49.977027 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d4d9e36_8d49_41a8_a04b_194a5f652f94.slice/crio-0689043097d8a067e4df58fd7ad33b4d1504904c89d0939b98d21bff6ddfa350 WatchSource:0}: Error finding container 0689043097d8a067e4df58fd7ad33b4d1504904c89d0939b98d21bff6ddfa350: Status 404 returned error can't find the container with id 0689043097d8a067e4df58fd7ad33b4d1504904c89d0939b98d21bff6ddfa350 Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.992555 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.992875 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.492862262 +0000 UTC m=+94.657569737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.001855 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" podStartSLOduration=73.00183988 podStartE2EDuration="1m13.00183988s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:49.999666968 +0000 UTC m=+94.164374433" watchObservedRunningTime="2026-01-26 23:09:50.00183988 +0000 UTC m=+94.166547345" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.051692 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" podStartSLOduration=73.05167607 podStartE2EDuration="1m13.05167607s" podCreationTimestamp="2026-01-26 23:08:37 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:50.050930812 +0000 UTC m=+94.215638287" watchObservedRunningTime="2026-01-26 23:09:50.05167607 +0000 UTC m=+94.216383535" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.104274 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.105750 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.605720272 +0000 UTC m=+94.770427747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.158743 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" podStartSLOduration=73.158722088 podStartE2EDuration="1m13.158722088s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:50.135512425 +0000 UTC m=+94.300219890" watchObservedRunningTime="2026-01-26 23:09:50.158722088 +0000 UTC m=+94.323429553" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.177988 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wt84d"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.192801 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.212687 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.213046 4995 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.713032336 +0000 UTC m=+94.877739801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.263127 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:50 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:50 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:50 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.263187 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.298671 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" event={"ID":"75e69d02-9a6a-4bea-b3f5-1537ef5e2516","Type":"ContainerStarted","Data":"04199cb3f9efb2d7bb8fe668230301dc989ef5f0b9fcfab14e8a09e18ee33f31"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.298718 4995 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" event={"ID":"75e69d02-9a6a-4bea-b3f5-1537ef5e2516","Type":"ContainerStarted","Data":"de6f50d204e5dc9547a3689aa868d0bb5d709a60df0715a724d272a4b031cf25"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.314241 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.314823 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.814804246 +0000 UTC m=+94.979511711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.314837 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" event={"ID":"841a4225-c083-4025-bd1e-c6cd2ebf2b85","Type":"ContainerStarted","Data":"2179003cd7939a975598e3016ba0b03669026ac2c8692a0b6526139191f6dabc"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.331722 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" podStartSLOduration=73.331705877 podStartE2EDuration="1m13.331705877s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:50.329506453 +0000 UTC m=+94.494213918" watchObservedRunningTime="2026-01-26 23:09:50.331705877 +0000 UTC m=+94.496413342" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.332352 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pw55h"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.337713 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" event={"ID":"96211e14-9e17-4511-8523-609ff907f5c5","Type":"ContainerStarted","Data":"138e1b9eafa611ecccd5af5f57dd4b102b352744b3feb0f19c2d0dbc6d5c17ec"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 
23:09:50.337774 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" event={"ID":"96211e14-9e17-4511-8523-609ff907f5c5","Type":"ContainerStarted","Data":"d140e4cfb51c291288bdc459112677398991ad415b55382743a58000ff18cafa"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.352082 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.352893 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.360139 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" event={"ID":"dedff685-1753-453d-a4ec-4e48b74cfdc4","Type":"ContainerStarted","Data":"16c116e2dcc97a59480ce16ed4abe4a87f4cd815400f757d482908a17bc8e17b"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.373872 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" event={"ID":"ec91f390-afe7-440e-b452-3f0bd7e65862","Type":"ContainerStarted","Data":"ba6fdd27dee74df98e17201dbc91afb8e201e8ca3541029b31971982d4cc576c"} Jan 26 23:09:50 crc kubenswrapper[4995]: W0126 23:09:50.373972 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41fedfb8_9381_43a2_8f78_2dea53ad7882.slice/crio-209ce7b3e6777cd9c1558c55470216a251eeab6294b0753924736c94cd89a627 WatchSource:0}: Error finding container 209ce7b3e6777cd9c1558c55470216a251eeab6294b0753924736c94cd89a627: Status 404 returned error can't find the container with id 209ce7b3e6777cd9c1558c55470216a251eeab6294b0753924736c94cd89a627 Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.384474 4995 
generic.go:334] "Generic (PLEG): container finished" podID="ee963cde-b7bc-4699-9b45-aaa3b7df0e38" containerID="9f45b7e58337e68bb27dec66942c772f61c0d530e6672fd4c8fe1efec8aaa2a3" exitCode=0 Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.384799 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v665q" event={"ID":"ee963cde-b7bc-4699-9b45-aaa3b7df0e38","Type":"ContainerDied","Data":"9f45b7e58337e68bb27dec66942c772f61c0d530e6672fd4c8fe1efec8aaa2a3"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.384829 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k4xnx"] Jan 26 23:09:50 crc kubenswrapper[4995]: W0126 23:09:50.398015 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod466a813e_97dd_4113_b15c_1e0216edca40.slice/crio-e13291e7d158b20d4d8ea0898c207b562a9dfe885da9f3620d7931d70a7400b9 WatchSource:0}: Error finding container e13291e7d158b20d4d8ea0898c207b562a9dfe885da9f3620d7931d70a7400b9: Status 404 returned error can't find the container with id e13291e7d158b20d4d8ea0898c207b562a9dfe885da9f3620d7931d70a7400b9 Jan 26 23:09:50 crc kubenswrapper[4995]: W0126 23:09:50.398899 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09fe04fa_126d_4c84_948f_55b13dad9e24.slice/crio-339919c85c25ea1683f5033f117ddf1ac344477c80f1687b1b322a624fa546d6 WatchSource:0}: Error finding container 339919c85c25ea1683f5033f117ddf1ac344477c80f1687b1b322a624fa546d6: Status 404 returned error can't find the container with id 339919c85c25ea1683f5033f117ddf1ac344477c80f1687b1b322a624fa546d6 Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.399208 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" 
event={"ID":"e85666ee-5696-465c-9682-802e968660ec","Type":"ContainerStarted","Data":"6711724abcc04497430771ccde92985e3aa378bddde6826bf85e7a8b5846f861"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.415493 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.416823 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.916807412 +0000 UTC m=+95.081514867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.449767 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-tw45t" podStartSLOduration=73.449750562 podStartE2EDuration="1m13.449750562s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:50.449723611 +0000 UTC m=+94.614431086" watchObservedRunningTime="2026-01-26 23:09:50.449750562 +0000 UTC 
m=+94.614458027" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.495262 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.499660 4995 generic.go:334] "Generic (PLEG): container finished" podID="321948cb-6f71-4375-b575-ee960cd49bc2" containerID="d0c5022bc8c220348c16ec918943e215fdb768799fd02e688e4e67a379a01657" exitCode=0 Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.499723 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" event={"ID":"321948cb-6f71-4375-b575-ee960cd49bc2","Type":"ContainerDied","Data":"d0c5022bc8c220348c16ec918943e215fdb768799fd02e688e4e67a379a01657"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.499749 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" event={"ID":"321948cb-6f71-4375-b575-ee960cd49bc2","Type":"ContainerStarted","Data":"f36e5f721faf9a70dfb86966046c5b8a1bdf9ed26f64168308ee1787e8bafa4a"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.500593 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.502387 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phjts"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.517328 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 
23:09:50.519944 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.019921575 +0000 UTC m=+95.184629030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.562784 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" event={"ID":"4d4d9e36-8d49-41a8-a04b-194a5f652f94","Type":"ContainerStarted","Data":"0689043097d8a067e4df58fd7ad33b4d1504904c89d0939b98d21bff6ddfa350"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.562836 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh"] Jan 26 23:09:50 crc kubenswrapper[4995]: W0126 23:09:50.589047 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7943ea01_9b7a_4a9b_9b13_6ef8203dd43b.slice/crio-2841bd7c15f23eef690682da4492e255886f0039594cccf2057e57738628ddea WatchSource:0}: Error finding container 2841bd7c15f23eef690682da4492e255886f0039594cccf2057e57738628ddea: Status 404 returned error can't find the container with id 2841bd7c15f23eef690682da4492e255886f0039594cccf2057e57738628ddea Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.590139 4995 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.591960 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" event={"ID":"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba","Type":"ContainerStarted","Data":"e12c2d1e4d04138049caecf9ae6ff19e5870afc1ee3de3084ebf8ae47b9bddcc"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.591985 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" event={"ID":"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba","Type":"ContainerStarted","Data":"2c9ba8302b72d83c238c6a51c10bac7413f42197cd395baa6cd3b0c1f2856c8e"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.603035 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tsdjk" event={"ID":"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8","Type":"ContainerStarted","Data":"f0061fddd3128478f8e5c61bf68ad5a3522fb0ef86078f5d0b706cffc8c22d1b"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.603084 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tsdjk" event={"ID":"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8","Type":"ContainerStarted","Data":"8f583dd5d40e1a11244a9041977046e68f749d9500439b3cae42253f19bc07fd"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.614227 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" event={"ID":"6ff36f00-70ac-4a9c-96f6-ade70040b187","Type":"ContainerStarted","Data":"9468e4366ef5f6d33a8eefb14db467bf11b044e6f88cf7ec9ac39d6a01a76fe4"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.620055 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8m6w4"] Jan 26 23:09:50 crc 
kubenswrapper[4995]: I0126 23:09:50.621046 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.621350 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.121332736 +0000 UTC m=+95.286040201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.702961 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zt9nn" event={"ID":"e80b6b9d-3bfd-4315-8643-695c2101bddb","Type":"ContainerStarted","Data":"4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.729202 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:50 crc 
kubenswrapper[4995]: E0126 23:09:50.729504 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.229470531 +0000 UTC m=+95.394177996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.729620 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.730773 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.230756252 +0000 UTC m=+95.395463797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.735367 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x9shl"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.737912 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" podStartSLOduration=73.737890685 podStartE2EDuration="1m13.737890685s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:50.730232239 +0000 UTC m=+94.894939704" watchObservedRunningTime="2026-01-26 23:09:50.737890685 +0000 UTC m=+94.902598150" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.738216 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" event={"ID":"da8ddf95-03f1-4cce-8ddb-22ea3735eb59","Type":"ContainerStarted","Data":"f7f3602af6d83ee299345cf560dac252f50bc08d861a55ec7e4343edb9599215"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.745685 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" event={"ID":"053917dd-5476-46d8-b9d4-2a1433d86697","Type":"ContainerStarted","Data":"19ad45a549b4780560be836102a3911d01d8cfa0afeb9e847667e7997f8505d4"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 
23:09:50.745724 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" event={"ID":"053917dd-5476-46d8-b9d4-2a1433d86697","Type":"ContainerStarted","Data":"a0e9db7c70df270c1aab27804f498df0ec21cbb1ef14b3800e9ad6e46c8502df"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.748077 4995 patch_prober.go:28] interesting pod/downloads-7954f5f757-pfw4t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.748117 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pfw4t" podUID="ce7a362e-896b-4492-ac2c-08bd19bba7b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.748975 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.752211 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.753054 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z4xpf"] Jan 26 23:09:50 crc kubenswrapper[4995]: W0126 23:09:50.775507 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3272988d_332d_4fe7_a794_c262bb6d8e11.slice/crio-6cef76c6127e4c063afdb02cd7a9a795f6f3131e95f45a30b362badda63d90f6 WatchSource:0}: Error finding container 
6cef76c6127e4c063afdb02cd7a9a795f6f3131e95f45a30b362badda63d90f6: Status 404 returned error can't find the container with id 6cef76c6127e4c063afdb02cd7a9a795f6f3131e95f45a30b362badda63d90f6 Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.803555 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.834302 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.835748 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.33572651 +0000 UTC m=+95.500433965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.936714 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.937152 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.437131781 +0000 UTC m=+95.601839256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.006265 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" podStartSLOduration=74.006244449 podStartE2EDuration="1m14.006244449s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.001619677 +0000 UTC m=+95.166327152" watchObservedRunningTime="2026-01-26 23:09:51.006244449 +0000 UTC m=+95.170951924" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.038191 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.038610 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.538591204 +0000 UTC m=+95.703298669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.140027 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.140514 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.640502477 +0000 UTC m=+95.805209942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.159973 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zt9nn" podStartSLOduration=74.15995576 podStartE2EDuration="1m14.15995576s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.158635878 +0000 UTC m=+95.323343333" watchObservedRunningTime="2026-01-26 23:09:51.15995576 +0000 UTC m=+95.324663225" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.265082 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.266230 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.766205819 +0000 UTC m=+95.930913294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.273887 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" podStartSLOduration=74.273862854 podStartE2EDuration="1m14.273862854s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.271399365 +0000 UTC m=+95.436106850" watchObservedRunningTime="2026-01-26 23:09:51.273862854 +0000 UTC m=+95.438570319" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.310918 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:51 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:51 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:51 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.310982 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.366952 4995 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.367412 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.867376554 +0000 UTC m=+96.032084069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.407601 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" podStartSLOduration=74.4075867 podStartE2EDuration="1m14.4075867s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.407063097 +0000 UTC m=+95.571770562" watchObservedRunningTime="2026-01-26 23:09:51.4075867 +0000 UTC m=+95.572294165" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.432432 4995 csr.go:261] certificate signing request csr-slcmk is approved, waiting to be issued Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.449965 4995 csr.go:257] certificate signing request csr-slcmk 
is issued Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.464488 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" podStartSLOduration=74.464471561 podStartE2EDuration="1m14.464471561s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.462579475 +0000 UTC m=+95.627286940" watchObservedRunningTime="2026-01-26 23:09:51.464471561 +0000 UTC m=+95.629179026" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.467822 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.468224 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.968205331 +0000 UTC m=+96.132912796 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.508438 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.539394 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-tsdjk" podStartSLOduration=6.539379049 podStartE2EDuration="6.539379049s" podCreationTimestamp="2026-01-26 23:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.53900949 +0000 UTC m=+95.703716955" watchObservedRunningTime="2026-01-26 23:09:51.539379049 +0000 UTC m=+95.704086514" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.572053 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.572396 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 23:09:52.07238319 +0000 UTC m=+96.237090655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.675716 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.676269 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.176253841 +0000 UTC m=+96.340961306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.777615 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.777995 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.277982499 +0000 UTC m=+96.442689974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.779583 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" event={"ID":"e85666ee-5696-465c-9682-802e968660ec","Type":"ContainerStarted","Data":"5bfcbd075cde3aa81de266cf36c70ef7981337818ad10d4986e59d33b2b2eca7"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.783041 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" event={"ID":"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b","Type":"ContainerStarted","Data":"adbdf0fe767678525a4521c890a3008330b01b81971d08f068bc228c05d82eb4"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.783070 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" event={"ID":"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b","Type":"ContainerStarted","Data":"2841bd7c15f23eef690682da4492e255886f0039594cccf2057e57738628ddea"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.783804 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.795261 4995 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nglhh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.795304 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" podUID="7943ea01-9b7a-4a9b-9b13-6ef8203dd43b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.796183 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" event={"ID":"7de4fe23-2da4-47df-a68b-d6d5148ab964","Type":"ContainerStarted","Data":"a1b58f1c7c19e3271d8e92fc188032b01aa219cc41efeec1b600d96847739166"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.796215 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" event={"ID":"7de4fe23-2da4-47df-a68b-d6d5148ab964","Type":"ContainerStarted","Data":"052973f6fc62d2870635d2389e1e0d1e76e71a306a0edffd354da85ca2cc2015"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.817829 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" podStartSLOduration=74.817813376 podStartE2EDuration="1m14.817813376s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.815404607 +0000 UTC m=+95.980112072" watchObservedRunningTime="2026-01-26 23:09:51.817813376 +0000 UTC m=+95.982520841" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.823882 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" event={"ID":"c9544187-4d8b-4764-bfdb-067d6d6d06b4","Type":"ContainerStarted","Data":"9dbb95fddc9eac5cf0bf1fd19f34d55bb35056a926522f3d25044db55f895b3c"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.867009 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" event={"ID":"41fedfb8-9381-43a2-8f78-2dea53ad7882","Type":"ContainerStarted","Data":"3e759b12e81af7b0175ab715bf9ec94beacbe5d6a93e6fedb53a3dbf3e4469ed"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.867066 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" event={"ID":"41fedfb8-9381-43a2-8f78-2dea53ad7882","Type":"ContainerStarted","Data":"209ce7b3e6777cd9c1558c55470216a251eeab6294b0753924736c94cd89a627"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.878634 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.879942 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.379922923 +0000 UTC m=+96.544630398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.902395 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" podStartSLOduration=74.902372948 podStartE2EDuration="1m14.902372948s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.851252317 +0000 UTC m=+96.015959782" watchObservedRunningTime="2026-01-26 23:09:51.902372948 +0000 UTC m=+96.067080413" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.902744 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" podStartSLOduration=74.902735747 podStartE2EDuration="1m14.902735747s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.899695893 +0000 UTC m=+96.064403368" watchObservedRunningTime="2026-01-26 23:09:51.902735747 +0000 UTC m=+96.067443222" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.958495 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" event={"ID":"480d13a8-eecc-4614-9b43-fd3fb5f28695","Type":"ContainerStarted","Data":"261acd69eb386ddaf479b9993cf8fbe37e59621b58cb52cf565e7595f52df018"} Jan 
26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.972212 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" event={"ID":"da8ddf95-03f1-4cce-8ddb-22ea3735eb59","Type":"ContainerStarted","Data":"66b5b4292a2a7c23d32feaccd6011afd85110cf71d55ebb1e116577fe571d501"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.981921 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.982259 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.482247647 +0000 UTC m=+96.646955112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.995326 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" event={"ID":"053917dd-5476-46d8-b9d4-2a1433d86697","Type":"ContainerStarted","Data":"ff78c88e4de9e3986b9995d8728e4636db331afa8f5bcb23a1f0ae74ab076d20"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.019436 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" podStartSLOduration=75.019415149 podStartE2EDuration="1m15.019415149s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.019417139 +0000 UTC m=+96.184124604" watchObservedRunningTime="2026-01-26 23:09:52.019415149 +0000 UTC m=+96.184122614" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.048308 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8m6w4" event={"ID":"475d4d77-5500-4d9d-8d5f-c9fe0f47364b","Type":"ContainerStarted","Data":"c52cdb7f64b66074ef00da2a8e7641edd06c5d26c86beecfa122f21f916d388f"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.048365 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8m6w4" 
event={"ID":"475d4d77-5500-4d9d-8d5f-c9fe0f47364b","Type":"ContainerStarted","Data":"2cdc2a411873965984b12b626c218baa7413ee8cd7c72cdfd4e50a4b4aec5e30"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.094368 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" podStartSLOduration=75.094349558 podStartE2EDuration="1m15.094349558s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.062486114 +0000 UTC m=+96.227193589" watchObservedRunningTime="2026-01-26 23:09:52.094349558 +0000 UTC m=+96.259057023" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.097203 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.098315 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.598301034 +0000 UTC m=+96.763008499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.104701 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" event={"ID":"119edb68-a6b6-4bdf-9f74-c14211a24ecd","Type":"ContainerStarted","Data":"be2e0b08ab75d269003ff41dcd04a1efa5e2629a82770cafe9dee6ae2f712209"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.106496 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v665q" event={"ID":"ee963cde-b7bc-4699-9b45-aaa3b7df0e38","Type":"ContainerStarted","Data":"f6a1dfaeb088a79efe2e9257f16373638bb8bbade02656c5af4a85ba1c062d9c"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.131937 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" event={"ID":"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26","Type":"ContainerStarted","Data":"eca71026dc9f3b3dbe01cbb7bd01b700fb9a8d49cb36d1ff176aa5ee7d254757"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.152445 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" event={"ID":"4d4d9e36-8d49-41a8-a04b-194a5f652f94","Type":"ContainerStarted","Data":"47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.152655 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:52 
crc kubenswrapper[4995]: I0126 23:09:52.169716 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" podStartSLOduration=74.169697717 podStartE2EDuration="1m14.169697717s" podCreationTimestamp="2026-01-26 23:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.168260752 +0000 UTC m=+96.332968217" watchObservedRunningTime="2026-01-26 23:09:52.169697717 +0000 UTC m=+96.334405182" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.170932 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8m6w4" podStartSLOduration=7.170921936 podStartE2EDuration="7.170921936s" podCreationTimestamp="2026-01-26 23:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.096680814 +0000 UTC m=+96.261388279" watchObservedRunningTime="2026-01-26 23:09:52.170921936 +0000 UTC m=+96.335629411" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.173415 4995 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-tzh2d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body= Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.173484 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.175196 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" event={"ID":"466a813e-97dd-4113-b15c-1e0216edca40","Type":"ContainerStarted","Data":"05dc86749f9dd2e3990900dd850ea9416fc306818b1bec95ac6a6321744177cc"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.175226 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" event={"ID":"466a813e-97dd-4113-b15c-1e0216edca40","Type":"ContainerStarted","Data":"e13291e7d158b20d4d8ea0898c207b562a9dfe885da9f3620d7931d70a7400b9"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.192483 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" event={"ID":"09fe04fa-126d-4c84-948f-55b13dad9e24","Type":"ContainerStarted","Data":"4ffe565df59b5bc79fbfbed2c58fd33ab405ab72849f847128ce3ad63c0cf89c"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.192525 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" event={"ID":"09fe04fa-126d-4c84-948f-55b13dad9e24","Type":"ContainerStarted","Data":"339919c85c25ea1683f5033f117ddf1ac344477c80f1687b1b322a624fa546d6"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.194504 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.198492 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.200875 4995 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.700854853 +0000 UTC m=+96.865562368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.206983 4995 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8llf9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.207049 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" podUID="09fe04fa-126d-4c84-948f-55b13dad9e24" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.213337 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" event={"ID":"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae","Type":"ContainerStarted","Data":"2b7607e6165bf1d4a4546b4d547ad357b02e5110c760b8b5bf5d9c5983e7b8db"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.236049 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" event={"ID":"841a4225-c083-4025-bd1e-c6cd2ebf2b85","Type":"ContainerStarted","Data":"caa42be5399fec9cdf7e677d8727f49170bcdd061bffde68b0a8fbd63bbf5777"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.262363 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wt84d" event={"ID":"ab1b8e08-3212-4197-a8e7-db12babb6414","Type":"ContainerStarted","Data":"dc24e5e0d1a418b24121e95a5c16f151f64b7a6516ad40a04a0f83b716c02a5c"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.262409 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wt84d" event={"ID":"ab1b8e08-3212-4197-a8e7-db12babb6414","Type":"ContainerStarted","Data":"c987d061c01df87edfd392b564ecafbeb38a05d30db68fe947bd28cb3a945eee"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.265359 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:52 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:52 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:52 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.265422 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.284402 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" 
event={"ID":"6ff36f00-70ac-4a9c-96f6-ade70040b187","Type":"ContainerStarted","Data":"8d662873249ff4c19879c2de852c98e29aa88240e0f38b5a5e3455df51ddb9ce"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.305357 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.307016 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.806994229 +0000 UTC m=+96.971701694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.309124 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" event={"ID":"ec91f390-afe7-440e-b452-3f0bd7e65862","Type":"ContainerStarted","Data":"177e8122a91f7debb79e36811c28cd0d462e25e0266e9ce5472208d8ab56d59e"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.309352 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" 
event={"ID":"ec91f390-afe7-440e-b452-3f0bd7e65862","Type":"ContainerStarted","Data":"32e5b0c926e714f88c49bb8f0d89c57260d146538e5aecf7e4305a05024e9d91"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.347537 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" event={"ID":"3f9a7b30-dccb-4753-81a1-622853d6ba3c","Type":"ContainerStarted","Data":"f901f601e0243ea0adb58f7b81260269e5e87406c390fbde6045e9147797112d"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.347753 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" podStartSLOduration=75.347732438 podStartE2EDuration="1m15.347732438s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.235679498 +0000 UTC m=+96.400386973" watchObservedRunningTime="2026-01-26 23:09:52.347732438 +0000 UTC m=+96.512439923" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.348693 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.348802 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" podStartSLOduration=75.348794944 podStartE2EDuration="1m15.348794944s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.319995945 +0000 UTC m=+96.484703410" watchObservedRunningTime="2026-01-26 23:09:52.348794944 +0000 UTC m=+96.513502409" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.355507 4995 patch_prober.go:28] interesting 
pod/marketplace-operator-79b997595-phjts container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.355570 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.358660 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" podStartSLOduration=75.358630172 podStartE2EDuration="1m15.358630172s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.349244394 +0000 UTC m=+96.513951859" watchObservedRunningTime="2026-01-26 23:09:52.358630172 +0000 UTC m=+96.523337667" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.360638 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.360703 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.371641 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" event={"ID":"dedff685-1753-453d-a4ec-4e48b74cfdc4","Type":"ContainerStarted","Data":"45a093783232fa31c18d58ec566f319c5ea1702bc8113b7bd398c26390a146b8"} Jan 
26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.371698 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" event={"ID":"dedff685-1753-453d-a4ec-4e48b74cfdc4","Type":"ContainerStarted","Data":"2d33d284fe3cdd9b402299d5d737ea07c24cd58a2484eae421d6bdc615797f5d"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.386205 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" podStartSLOduration=75.386179041 podStartE2EDuration="1m15.386179041s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.382773158 +0000 UTC m=+96.547480633" watchObservedRunningTime="2026-01-26 23:09:52.386179041 +0000 UTC m=+96.550886506" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.387546 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" event={"ID":"8d0941b6-29be-464b-91b9-ecd2e8545dc0","Type":"ContainerStarted","Data":"347123cb30c0a17c4092f409f11db531a9b1ac23288bc2d3d58b8a9825e46ee9"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.387581 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" event={"ID":"8d0941b6-29be-464b-91b9-ecd2e8545dc0","Type":"ContainerStarted","Data":"dd0bf1ef65dc2b9ea2219ffc20c2e478ee6397d48e2bc37b205576281b56c88c"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.407043 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.408658 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.908642656 +0000 UTC m=+97.073350121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.421805 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" event={"ID":"3272988d-332d-4fe7-a794-c262bb6d8e11","Type":"ContainerStarted","Data":"6cef76c6127e4c063afdb02cd7a9a795f6f3131e95f45a30b362badda63d90f6"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.421967 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" podStartSLOduration=74.421956949 podStartE2EDuration="1m14.421956949s" podCreationTimestamp="2026-01-26 23:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.419984501 +0000 UTC m=+96.584691966" watchObservedRunningTime="2026-01-26 23:09:52.421956949 +0000 UTC m=+96.586664414" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.423753 4995 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-pfw4t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.423803 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pfw4t" podUID="ce7a362e-896b-4492-ac2c-08bd19bba7b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.451168 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 23:04:51 +0000 UTC, rotation deadline is 2026-10-25 14:03:26.76835503 +0000 UTC Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.451226 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6518h53m34.31713222s for next certificate rotation Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.472355 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" podStartSLOduration=75.472334802 podStartE2EDuration="1m15.472334802s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.466281385 +0000 UTC m=+96.630988850" watchObservedRunningTime="2026-01-26 23:09:52.472334802 +0000 UTC m=+96.637042267" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.509579 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.514026 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.014004153 +0000 UTC m=+97.178711698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.523186 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" podStartSLOduration=75.523169286 podStartE2EDuration="1m15.523169286s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.521780102 +0000 UTC m=+96.686487567" watchObservedRunningTime="2026-01-26 23:09:52.523169286 +0000 UTC m=+96.687876761" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.545495 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.593916 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" podStartSLOduration=74.593894622 podStartE2EDuration="1m14.593894622s" 
podCreationTimestamp="2026-01-26 23:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.588945652 +0000 UTC m=+96.753653117" watchObservedRunningTime="2026-01-26 23:09:52.593894622 +0000 UTC m=+96.758602097" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.614286 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.614650 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.114635236 +0000 UTC m=+97.279342701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.641957 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" podStartSLOduration=75.641933738 podStartE2EDuration="1m15.641933738s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.63251219 +0000 UTC m=+96.797219655" watchObservedRunningTime="2026-01-26 23:09:52.641933738 +0000 UTC m=+96.806641203" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.717685 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.718038 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.218020775 +0000 UTC m=+97.382728240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.819327 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.819689 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.319673742 +0000 UTC m=+97.484381207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.920300 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.920669 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.420651123 +0000 UTC m=+97.585358588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.971376 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.021741 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.022208 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.522195928 +0000 UTC m=+97.686903393 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.123117 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.123305 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.623277731 +0000 UTC m=+97.787985196 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.123437 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.123759 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.623746253 +0000 UTC m=+97.788453718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.224634 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.724610211 +0000 UTC m=+97.889317676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.224674 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.225019 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.225378 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.725367779 +0000 UTC m=+97.890075244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.256772 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:53 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:53 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:53 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.257167 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.326208 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.326633 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.826582326 +0000 UTC m=+97.991289811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.326743 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.327197 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.82717898 +0000 UTC m=+97.991886445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.428511 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.428716 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.928690944 +0000 UTC m=+98.093398409 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.428853 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.429220 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.929208987 +0000 UTC m=+98.093916452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.436372 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" event={"ID":"466a813e-97dd-4113-b15c-1e0216edca40","Type":"ContainerStarted","Data":"c76f7d82e7032dc4b3ac8bf3044be581dde5e4f417e82edf564a451e3cfc7f1a"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.439680 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" event={"ID":"c9544187-4d8b-4764-bfdb-067d6d6d06b4","Type":"ContainerStarted","Data":"5e58a6ce93af4848462a80240a626a91fc4d1f5d8ac87abf28e4213177300903"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.443341 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" event={"ID":"119edb68-a6b6-4bdf-9f74-c14211a24ecd","Type":"ContainerStarted","Data":"b8a1fdab4fb968bb12d219fe85fc09650cec7cf98aa079983715ceb8a4e74fd7"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.443386 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" event={"ID":"119edb68-a6b6-4bdf-9f74-c14211a24ecd","Type":"ContainerStarted","Data":"4611d303439c93e0dbfec5fc69dabc2d2ec7db08bdcbfcd2b14b6ece9d0d16ce"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.443555 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.451059 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" event={"ID":"480d13a8-eecc-4614-9b43-fd3fb5f28695","Type":"ContainerStarted","Data":"51322341039e5bef7815e1c3fcee6f14bc93a4c4e2b1ce327c1c5c4c4e3d0ee1"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.452167 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.460672 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wt84d" event={"ID":"ab1b8e08-3212-4197-a8e7-db12babb6414","Type":"ContainerStarted","Data":"016463c4e561813efadc381670a8731264d6a825a500650bfd514d2f13e1b85e"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.461406 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.463473 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" event={"ID":"3f9a7b30-dccb-4753-81a1-622853d6ba3c","Type":"ContainerStarted","Data":"ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.464775 4995 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-phjts container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.464812 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" 
podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.468179 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" event={"ID":"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae","Type":"ContainerStarted","Data":"db9dedffe35b3e14d9b890a473e94becd65659f6848fe2b495fae97ea15495f8"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.471655 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" event={"ID":"41fedfb8-9381-43a2-8f78-2dea53ad7882","Type":"ContainerStarted","Data":"2dcf22460f6e4ed87e1c53aba10862935a62e364f9ed56650a002a271b8a9cf2"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.484320 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" podStartSLOduration=76.484299574 podStartE2EDuration="1m16.484299574s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:53.469077975 +0000 UTC m=+97.633785440" watchObservedRunningTime="2026-01-26 23:09:53.484299574 +0000 UTC m=+97.649007039" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.493601 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v665q" event={"ID":"ee963cde-b7bc-4699-9b45-aaa3b7df0e38","Type":"ContainerStarted","Data":"5059d7545e626d7ede92cdac2582a6bca1d3973652183fc2821a373d3b81e3ff"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.501902 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" 
event={"ID":"3272988d-332d-4fe7-a794-c262bb6d8e11","Type":"ContainerStarted","Data":"0cb60e301035a827dcf1c548a706e468599bc733dfaaa2f95fd3d095cad45f22"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.518222 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.522178 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.530885 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.533024 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.032994636 +0000 UTC m=+98.197702111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.533783 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.534662 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" podStartSLOduration=76.534640426 podStartE2EDuration="1m16.534640426s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:53.517668274 +0000 UTC m=+97.682375749" watchObservedRunningTime="2026-01-26 23:09:53.534640426 +0000 UTC m=+97.699347891" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.571045 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" podStartSLOduration=75.571030059 podStartE2EDuration="1m15.571030059s" podCreationTimestamp="2026-01-26 23:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:53.570585018 +0000 UTC m=+97.735292493" watchObservedRunningTime="2026-01-26 23:09:53.571030059 +0000 UTC m=+97.735737524" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.572226 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.637789 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wt84d" podStartSLOduration=8.637773598999999 podStartE2EDuration="8.637773599s" podCreationTimestamp="2026-01-26 23:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:53.627134031 +0000 UTC m=+97.791841496" watchObservedRunningTime="2026-01-26 23:09:53.637773599 +0000 UTC m=+97.802481064" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.643984 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.649820 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.149805231 +0000 UTC m=+98.314512696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.696527 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" podStartSLOduration=76.696504595 podStartE2EDuration="1m16.696504595s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:53.679661416 +0000 UTC m=+97.844368891" watchObservedRunningTime="2026-01-26 23:09:53.696504595 +0000 UTC m=+97.861212060" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.732001 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" podStartSLOduration=76.731983936 podStartE2EDuration="1m16.731983936s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:53.729535896 +0000 UTC m=+97.894243361" watchObservedRunningTime="2026-01-26 23:09:53.731983936 +0000 UTC m=+97.896691401" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.746607 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.747127 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.247089642 +0000 UTC m=+98.411797117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.851598 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.851951 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.351933677 +0000 UTC m=+98.516641142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.957667 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.958053 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.458037432 +0000 UTC m=+98.622744897 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.989847 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-v665q" podStartSLOduration=76.989824494 podStartE2EDuration="1m16.989824494s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:53.986583405 +0000 UTC m=+98.151290870" watchObservedRunningTime="2026-01-26 23:09:53.989824494 +0000 UTC m=+98.154531969" Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.061195 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.061728 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.561707759 +0000 UTC m=+98.726415264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.162543 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.162726 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.662678659 +0000 UTC m=+98.827386124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.162917 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.163334 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.663322665 +0000 UTC m=+98.828030130 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.258230 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:54 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:54 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:54 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.258280 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.264074 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.264286 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 23:09:54.764236434 +0000 UTC m=+98.928943899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.264404 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.264803 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.764791478 +0000 UTC m=+98.929498963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.365606 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.365753 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.865724538 +0000 UTC m=+99.030432013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.366064 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.366414 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.866401924 +0000 UTC m=+99.031109439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.467492 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.467664 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.967638281 +0000 UTC m=+99.132345746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.467792 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.468134 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.968121173 +0000 UTC m=+99.132828638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.502772 4995 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nglhh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.502874 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" podUID="7943ea01-9b7a-4a9b-9b13-6ef8203dd43b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.509343 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" event={"ID":"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae","Type":"ContainerStarted","Data":"854c65a3bfabc7f2794f203aa6c37b4a6cf9d9e8fd8618fb94935ccaf11827d4"} Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.509383 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" event={"ID":"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae","Type":"ContainerStarted","Data":"341fe03dc3df1280b812fa605335212c613d14c06e4acdb6a55481a1b888361c"} Jan 26 23:09:54 crc kubenswrapper[4995]: 
I0126 23:09:54.510169 4995 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-phjts container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.510211 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.569025 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.569267 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.069240797 +0000 UTC m=+99.233948262 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.569750 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.571298 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.071285217 +0000 UTC m=+99.235992682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.630138 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.673536 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.673769 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.173742684 +0000 UTC m=+99.338450149 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.673956 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.674305 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.174292097 +0000 UTC m=+99.338999562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.774868 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.775048 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.275022762 +0000 UTC m=+99.439730227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.775174 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.775593 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.275579885 +0000 UTC m=+99.440287360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.784794 4995 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.876193 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.876411 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.376378882 +0000 UTC m=+99.541086347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.876690 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.876987 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.376975416 +0000 UTC m=+99.541682871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.978012 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.978598 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.478583322 +0000 UTC m=+99.643290787 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.080318 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:55 crc kubenswrapper[4995]: E0126 23:09:55.080693 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.58067795 +0000 UTC m=+99.745385415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.181699 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:55 crc kubenswrapper[4995]: E0126 23:09:55.182077 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.682060491 +0000 UTC m=+99.846767966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.256285 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6wf22"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.257196 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.259320 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.260210 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:55 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:55 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:55 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.260259 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.282702 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:55 crc kubenswrapper[4995]: E0126 23:09:55.283025 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.783009391 +0000 UTC m=+99.947716846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.315724 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6wf22"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.383884 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:55 crc kubenswrapper[4995]: E0126 23:09:55.384063 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.884037892 +0000 UTC m=+100.048745357 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.384181 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-catalog-content\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.384240 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-utilities\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.384355 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.384396 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbvbj\" (UniqueName: 
\"kubernetes.io/projected/58513b5e-460e-4344-91e3-1d20e26fd533-kube-api-access-xbvbj\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: E0126 23:09:55.384647 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.884639246 +0000 UTC m=+100.049346701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.455386 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8z855"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.456215 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.458057 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.465203 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8z855"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.485560 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.485839 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbvbj\" (UniqueName: \"kubernetes.io/projected/58513b5e-460e-4344-91e3-1d20e26fd533-kube-api-access-xbvbj\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.485913 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.485934 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-catalog-content\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " 
pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.485977 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-utilities\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: E0126 23:09:55.486566 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.98653594 +0000 UTC m=+100.151243405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.486857 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-utilities\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.487170 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-catalog-content\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " 
pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.492855 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.502646 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbvbj\" (UniqueName: \"kubernetes.io/projected/58513b5e-460e-4344-91e3-1d20e26fd533-kube-api-access-xbvbj\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.516853 4995 generic.go:334] "Generic (PLEG): container finished" podID="7de4fe23-2da4-47df-a68b-d6d5148ab964" containerID="a1b58f1c7c19e3271d8e92fc188032b01aa219cc41efeec1b600d96847739166" exitCode=0 Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.516904 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" event={"ID":"7de4fe23-2da4-47df-a68b-d6d5148ab964","Type":"ContainerDied","Data":"a1b58f1c7c19e3271d8e92fc188032b01aa219cc41efeec1b600d96847739166"} Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.518807 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" event={"ID":"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae","Type":"ContainerStarted","Data":"0590f49f70ec5bb7f49bc800e9731d7932646b95b6b71338d271b90ff8efccfc"} Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.551386 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" podStartSLOduration=10.551372283 
podStartE2EDuration="10.551372283s" podCreationTimestamp="2026-01-26 23:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:55.54999368 +0000 UTC m=+99.714701145" watchObservedRunningTime="2026-01-26 23:09:55.551372283 +0000 UTC m=+99.716079748" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.575708 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.592827 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-catalog-content\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.592867 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbb4m\" (UniqueName: \"kubernetes.io/projected/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-kube-api-access-jbb4m\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.594200 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-utilities\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.594273 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:55 crc kubenswrapper[4995]: E0126 23:09:55.605389 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:56.105346483 +0000 UTC m=+100.270053948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.637477 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.644167 4995 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T23:09:54.78481741Z","Handler":null,"Name":""} Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.647417 4995 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.647461 4995 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.657322 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q6mtp"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.662543 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.671390 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6mtp"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.703877 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.704356 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbb4m\" (UniqueName: \"kubernetes.io/projected/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-kube-api-access-jbb4m\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.704462 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-utilities\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.704544 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-catalog-content\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.705006 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-catalog-content\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.705474 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-utilities\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.708577 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.721899 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbb4m\" (UniqueName: \"kubernetes.io/projected/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-kube-api-access-jbb4m\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.772668 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.802199 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6wf22"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.805355 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.805442 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-catalog-content\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.805460 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-utilities\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.805479 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxr7q\" (UniqueName: \"kubernetes.io/projected/6aacdfb4-d893-49a9-ae77-a150f1c0a430-kube-api-access-dxr7q\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: W0126 
23:09:55.809796 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58513b5e_460e_4344_91e3_1d20e26fd533.slice/crio-f1140a94397286fd3722f80f6c4a1ec3c8895bbf65314d7a81fe9bc35b32d3b7 WatchSource:0}: Error finding container f1140a94397286fd3722f80f6c4a1ec3c8895bbf65314d7a81fe9bc35b32d3b7: Status 404 returned error can't find the container with id f1140a94397286fd3722f80f6c4a1ec3c8895bbf65314d7a81fe9bc35b32d3b7 Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.813207 4995 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.813252 4995 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.854064 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7ptdh"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.855409 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.868571 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7ptdh"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.879360 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.906555 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-catalog-content\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.906589 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-utilities\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.906609 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxr7q\" (UniqueName: \"kubernetes.io/projected/6aacdfb4-d893-49a9-ae77-a150f1c0a430-kube-api-access-dxr7q\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.907195 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-utilities\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.907465 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-catalog-content\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.915430 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vlmfg"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.925492 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxr7q\" (UniqueName: \"kubernetes.io/projected/6aacdfb4-d893-49a9-ae77-a150f1c0a430-kube-api-access-dxr7q\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.980161 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8z855"] Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.008080 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-utilities\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.008156 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-catalog-content\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.008229 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p76nx\" (UniqueName: \"kubernetes.io/projected/869a6dc6-8120-4a1c-b424-1a06738aa55e-kube-api-access-p76nx\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.019065 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.026648 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.109889 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p76nx\" (UniqueName: \"kubernetes.io/projected/869a6dc6-8120-4a1c-b424-1a06738aa55e-kube-api-access-p76nx\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.109972 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-utilities\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.109992 4995 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-catalog-content\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.110571 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-catalog-content\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.110644 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-utilities\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.127352 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p76nx\" (UniqueName: \"kubernetes.io/projected/869a6dc6-8120-4a1c-b424-1a06738aa55e-kube-api-access-p76nx\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.181701 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.229836 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6mtp"] Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.257058 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:56 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:56 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:56 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.257135 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.291509 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hjxrn"] Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.419499 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7ptdh"] Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.527730 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.531220 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerID="2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292" exitCode=0 Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 
23:09:56.531400 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z855" event={"ID":"b7295e1f-e3cb-4710-8763-b02b3e9ed67b","Type":"ContainerDied","Data":"2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.531458 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z855" event={"ID":"b7295e1f-e3cb-4710-8763-b02b3e9ed67b","Type":"ContainerStarted","Data":"a9d19028654a4b4f323d0e8da8ba08742825da3af7b48d707205e793ef542ae5"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.536287 4995 generic.go:334] "Generic (PLEG): container finished" podID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerID="ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9" exitCode=0 Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.536338 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6mtp" event={"ID":"6aacdfb4-d893-49a9-ae77-a150f1c0a430","Type":"ContainerDied","Data":"ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.536372 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6mtp" event={"ID":"6aacdfb4-d893-49a9-ae77-a150f1c0a430","Type":"ContainerStarted","Data":"20ff719d1a611af55cb9cea51a19289e5f98717222c932e73ed4f4672c8a5fcb"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.543181 4995 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.543764 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" event={"ID":"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4","Type":"ContainerStarted","Data":"77a0650a9a37800b30025eaa5c17f734f4cf3685d82638b32ea776da6a52ebb1"} Jan 26 
23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.543800 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" event={"ID":"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4","Type":"ContainerStarted","Data":"8116ceea19f379c95631b0c94377eb4636083008b47db384584120c2df5f151d"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.546613 4995 generic.go:334] "Generic (PLEG): container finished" podID="58513b5e-460e-4344-91e3-1d20e26fd533" containerID="837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b" exitCode=0 Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.546671 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wf22" event={"ID":"58513b5e-460e-4344-91e3-1d20e26fd533","Type":"ContainerDied","Data":"837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.546704 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wf22" event={"ID":"58513b5e-460e-4344-91e3-1d20e26fd533","Type":"ContainerStarted","Data":"f1140a94397286fd3722f80f6c4a1ec3c8895bbf65314d7a81fe9bc35b32d3b7"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.550059 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" event={"ID":"c5507dd1-0894-4d9b-982d-817ebbb0092d","Type":"ContainerStarted","Data":"5f6d3ec7b74d90b9b5fb45870ef587ee2f0fc428a2b3bcd5b815fc5bb39eb662"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.550084 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" event={"ID":"c5507dd1-0894-4d9b-982d-817ebbb0092d","Type":"ContainerStarted","Data":"c0781d7b5c2499fcb553527a8fd295fe436cb8680c543a89922297ff4d9b554f"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.550296 4995 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.711520 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" podStartSLOduration=79.711497221 podStartE2EDuration="1m19.711497221s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:56.705806403 +0000 UTC m=+100.870513868" watchObservedRunningTime="2026-01-26 23:09:56.711497221 +0000 UTC m=+100.876204686" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.836500 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.927807 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7de4fe23-2da4-47df-a68b-d6d5148ab964-config-volume\") pod \"7de4fe23-2da4-47df-a68b-d6d5148ab964\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.927925 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7de4fe23-2da4-47df-a68b-d6d5148ab964-secret-volume\") pod \"7de4fe23-2da4-47df-a68b-d6d5148ab964\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.927974 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77hg9\" (UniqueName: \"kubernetes.io/projected/7de4fe23-2da4-47df-a68b-d6d5148ab964-kube-api-access-77hg9\") pod \"7de4fe23-2da4-47df-a68b-d6d5148ab964\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " Jan 26 23:09:56 crc 
kubenswrapper[4995]: I0126 23:09:56.928659 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7de4fe23-2da4-47df-a68b-d6d5148ab964-config-volume" (OuterVolumeSpecName: "config-volume") pod "7de4fe23-2da4-47df-a68b-d6d5148ab964" (UID: "7de4fe23-2da4-47df-a68b-d6d5148ab964"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.935914 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7de4fe23-2da4-47df-a68b-d6d5148ab964-kube-api-access-77hg9" (OuterVolumeSpecName: "kube-api-access-77hg9") pod "7de4fe23-2da4-47df-a68b-d6d5148ab964" (UID: "7de4fe23-2da4-47df-a68b-d6d5148ab964"). InnerVolumeSpecName "kube-api-access-77hg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.936007 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7de4fe23-2da4-47df-a68b-d6d5148ab964-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7de4fe23-2da4-47df-a68b-d6d5148ab964" (UID: "7de4fe23-2da4-47df-a68b-d6d5148ab964"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.029913 4995 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7de4fe23-2da4-47df-a68b-d6d5148ab964-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.030281 4995 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7de4fe23-2da4-47df-a68b-d6d5148ab964-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.030291 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77hg9\" (UniqueName: \"kubernetes.io/projected/7de4fe23-2da4-47df-a68b-d6d5148ab964-kube-api-access-77hg9\") on node \"crc\" DevicePath \"\"" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.038682 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 23:09:57 crc kubenswrapper[4995]: E0126 23:09:57.038899 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de4fe23-2da4-47df-a68b-d6d5148ab964" containerName="collect-profiles" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.038911 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de4fe23-2da4-47df-a68b-d6d5148ab964" containerName="collect-profiles" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.038999 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7de4fe23-2da4-47df-a68b-d6d5148ab964" containerName="collect-profiles" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.039391 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.044023 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.044277 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.082763 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.131053 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.131275 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.178980 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.232370 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.232465 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.232484 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.257657 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:57 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:57 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:57 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.257767 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.258194 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kube-api-access\") pod 
\"revision-pruner-9-crc\" (UID: \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.368303 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.454670 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-px4t9"] Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.455814 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.461437 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.464016 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-px4t9"] Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.518598 4995 patch_prober.go:28] interesting pod/downloads-7954f5f757-pfw4t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.518956 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-pfw4t" podUID="ce7a362e-896b-4492-ac2c-08bd19bba7b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.518598 4995 patch_prober.go:28] interesting pod/downloads-7954f5f757-pfw4t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.519218 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pfw4t" podUID="ce7a362e-896b-4492-ac2c-08bd19bba7b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.535931 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-utilities\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.536000 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-catalog-content\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.536045 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c27fz\" (UniqueName: \"kubernetes.io/projected/38be674d-6ae2-441d-b361-a9eea3b694a7-kube-api-access-c27fz\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.572049 4995 generic.go:334] "Generic (PLEG): container finished" podID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerID="86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f" exitCode=0 Jan 26 23:09:57 crc 
kubenswrapper[4995]: I0126 23:09:57.572139 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ptdh" event={"ID":"869a6dc6-8120-4a1c-b424-1a06738aa55e","Type":"ContainerDied","Data":"86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f"} Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.572166 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ptdh" event={"ID":"869a6dc6-8120-4a1c-b424-1a06738aa55e","Type":"ContainerStarted","Data":"20edf153be996b3cf630c557f436ea3736b0f71a5fce8a127880088910f8cf24"} Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.585392 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" event={"ID":"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4","Type":"ContainerStarted","Data":"a676ce9e45e110b934eacc0ed00833fb54699f6e8cba6d363a94925b526491d1"} Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.607189 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.610307 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" event={"ID":"7de4fe23-2da4-47df-a68b-d6d5148ab964","Type":"ContainerDied","Data":"052973f6fc62d2870635d2389e1e0d1e76e71a306a0edffd354da85ca2cc2015"} Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.610346 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="052973f6fc62d2870635d2389e1e0d1e76e71a306a0edffd354da85ca2cc2015" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.621413 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vlmfg" podStartSLOduration=80.621366435 podStartE2EDuration="1m20.621366435s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:57.619864339 +0000 UTC m=+101.784571814" watchObservedRunningTime="2026-01-26 23:09:57.621366435 +0000 UTC m=+101.786073900" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.637637 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-utilities\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.637720 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-catalog-content\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " 
pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.637767 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c27fz\" (UniqueName: \"kubernetes.io/projected/38be674d-6ae2-441d-b361-a9eea3b694a7-kube-api-access-c27fz\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.638290 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-catalog-content\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.640230 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-utilities\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.657796 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c27fz\" (UniqueName: \"kubernetes.io/projected/38be674d-6ae2-441d-b361-a9eea3b694a7-kube-api-access-c27fz\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.683884 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.791514 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.851584 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9wv6w"] Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.852746 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.866249 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wv6w"] Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.941897 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn7v7\" (UniqueName: \"kubernetes.io/projected/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-kube-api-access-jn7v7\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.942292 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-catalog-content\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.942463 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-utilities\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.046769 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jn7v7\" (UniqueName: \"kubernetes.io/projected/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-kube-api-access-jn7v7\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.046865 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-catalog-content\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.046931 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-utilities\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.047559 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-utilities\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.048712 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-catalog-content\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.092640 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn7v7\" (UniqueName: 
\"kubernetes.io/projected/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-kube-api-access-jn7v7\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.135444 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-px4t9"] Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.187317 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.252616 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.256126 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:58 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:58 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:58 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.256182 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.457311 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wq2hm"] Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.458241 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:58 crc 
kubenswrapper[4995]: I0126 23:09:58.458264 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.458574 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.463783 4995 patch_prober.go:28] interesting pod/console-f9d7485db-zt9nn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.463866 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zt9nn" podUID="e80b6b9d-3bfd-4315-8643-695c2101bddb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.463952 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.474089 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wq2hm"] Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.479420 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.479463 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.494024 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:58 crc 
kubenswrapper[4995]: I0126 23:09:58.583025 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df626\" (UniqueName: \"kubernetes.io/projected/5166d9b5-534e-4426-8085-a1900c7bdafb-kube-api-access-df626\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.583084 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-utilities\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.583170 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-catalog-content\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.614113 4995 generic.go:334] "Generic (PLEG): container finished" podID="66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65" containerID="85a6ad96b2e4587219604eaba4bfd026549d008f8e0ae682f8638f5bace71ac2" exitCode=0 Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.614421 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65","Type":"ContainerDied","Data":"85a6ad96b2e4587219604eaba4bfd026549d008f8e0ae682f8638f5bace71ac2"} Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.614451 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65","Type":"ContainerStarted","Data":"265077d657bfb7c86ffcaaca72051ddeb65fd20c0b30c89bc3a9372759d0789f"} Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.629078 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px4t9" event={"ID":"38be674d-6ae2-441d-b361-a9eea3b694a7","Type":"ContainerStarted","Data":"9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c"} Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.629129 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px4t9" event={"ID":"38be674d-6ae2-441d-b361-a9eea3b694a7","Type":"ContainerStarted","Data":"2791ea2f560df413a781ffdcf254d63067a2528c47ab19f2d416f080d3de6868"} Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.638262 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.684194 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df626\" (UniqueName: \"kubernetes.io/projected/5166d9b5-534e-4426-8085-a1900c7bdafb-kube-api-access-df626\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.684261 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-utilities\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.684299 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-catalog-content\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.684866 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-catalog-content\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.686269 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-utilities\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.761420 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df626\" (UniqueName: \"kubernetes.io/projected/5166d9b5-534e-4426-8085-a1900c7bdafb-kube-api-access-df626\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.788010 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.856210 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tkghc"] Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.861548 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.863499 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wv6w"] Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.872303 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tkghc"] Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.990979 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h6bf\" (UniqueName: \"kubernetes.io/projected/2cf84b12-2476-4bdf-92f2-016c722f74b5-kube-api-access-5h6bf\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.991449 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-utilities\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.991675 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-catalog-content\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.092834 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h6bf\" (UniqueName: \"kubernetes.io/projected/2cf84b12-2476-4bdf-92f2-016c722f74b5-kube-api-access-5h6bf\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " 
pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.093277 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-utilities\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.093318 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-catalog-content\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.093896 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-catalog-content\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.094076 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-utilities\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.138471 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h6bf\" (UniqueName: \"kubernetes.io/projected/2cf84b12-2476-4bdf-92f2-016c722f74b5-kube-api-access-5h6bf\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc 
kubenswrapper[4995]: I0126 23:09:59.202430 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.221403 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wq2hm"] Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.227276 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.261766 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:59 crc kubenswrapper[4995]: [+]has-synced ok Jan 26 23:09:59 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:59 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.261835 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:59 crc kubenswrapper[4995]: W0126 23:09:59.267990 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5166d9b5_534e_4426_8085_a1900c7bdafb.slice/crio-e6c2cdd4d29af6d09c813a8f167fa421c7aeada38df75885bcbaf2e7ea7b36fd WatchSource:0}: Error finding container e6c2cdd4d29af6d09c813a8f167fa421c7aeada38df75885bcbaf2e7ea7b36fd: Status 404 returned error can't find the container with id e6c2cdd4d29af6d09c813a8f167fa421c7aeada38df75885bcbaf2e7ea7b36fd Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.636635 4995 generic.go:334] "Generic (PLEG): 
container finished" podID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerID="9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c" exitCode=0 Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.636713 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px4t9" event={"ID":"38be674d-6ae2-441d-b361-a9eea3b694a7","Type":"ContainerDied","Data":"9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c"} Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.639258 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq2hm" event={"ID":"5166d9b5-534e-4426-8085-a1900c7bdafb","Type":"ContainerStarted","Data":"e6c2cdd4d29af6d09c813a8f167fa421c7aeada38df75885bcbaf2e7ea7b36fd"} Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.640815 4995 generic.go:334] "Generic (PLEG): container finished" podID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerID="e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55" exitCode=0 Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.641504 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wv6w" event={"ID":"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c","Type":"ContainerDied","Data":"e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55"} Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.641523 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wv6w" event={"ID":"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c","Type":"ContainerStarted","Data":"a0635a7bf961355bc048d1c04e92285d7c8c240f172e625a758ab7fa01b816d1"} Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.730166 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tkghc"] Jan 26 23:09:59 crc kubenswrapper[4995]: W0126 23:09:59.769053 4995 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cf84b12_2476_4bdf_92f2_016c722f74b5.slice/crio-d97600484aa0b6e5a49fbd12d065990500a065d91605ffe0a38d4313a4ca5f29 WatchSource:0}: Error finding container d97600484aa0b6e5a49fbd12d065990500a065d91605ffe0a38d4313a4ca5f29: Status 404 returned error can't find the container with id d97600484aa0b6e5a49fbd12d065990500a065d91605ffe0a38d4313a4ca5f29 Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.804846 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.807677 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.809997 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.811510 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.811621 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.908855 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.909064 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kubelet-dir\") pod 
\"revision-pruner-8-crc\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.973333 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.010295 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.010370 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.010476 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.046776 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.111664 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kube-api-access\") pod \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\" (UID: \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.111754 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kubelet-dir\") pod \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\" (UID: \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.111885 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65" (UID: "66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.112299 4995 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.115133 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65" (UID: "66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.133385 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.213593 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.256563 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.260299 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.637142 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.652185 4995 generic.go:334] "Generic (PLEG): container finished" podID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerID="132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92" exitCode=0 Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.652264 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq2hm" event={"ID":"5166d9b5-534e-4426-8085-a1900c7bdafb","Type":"ContainerDied","Data":"132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92"} Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.661620 4995 generic.go:334] "Generic (PLEG): container finished" podID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerID="9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462" exitCode=0 Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.661706 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkghc" 
event={"ID":"2cf84b12-2476-4bdf-92f2-016c722f74b5","Type":"ContainerDied","Data":"9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462"} Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.661741 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkghc" event={"ID":"2cf84b12-2476-4bdf-92f2-016c722f74b5","Type":"ContainerStarted","Data":"d97600484aa0b6e5a49fbd12d065990500a065d91605ffe0a38d4313a4ca5f29"} Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.668191 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65","Type":"ContainerDied","Data":"265077d657bfb7c86ffcaaca72051ddeb65fd20c0b30c89bc3a9372759d0789f"} Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.668241 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="265077d657bfb7c86ffcaaca72051ddeb65fd20c0b30c89bc3a9372759d0789f" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.668204 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:10:01 crc kubenswrapper[4995]: I0126 23:10:01.706972 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f","Type":"ContainerStarted","Data":"975487f2a635a929bba403332114f06fc9e164d81ecc0aa07e72ed358806c284"} Jan 26 23:10:01 crc kubenswrapper[4995]: I0126 23:10:01.707309 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f","Type":"ContainerStarted","Data":"55757148a77a95d861771d7e26e070797711057e63a4fc15d0c0698103b9e006"} Jan 26 23:10:01 crc kubenswrapper[4995]: I0126 23:10:01.726446 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.726425162 podStartE2EDuration="2.726425162s" podCreationTimestamp="2026-01-26 23:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:10:01.725572931 +0000 UTC m=+105.890280406" watchObservedRunningTime="2026-01-26 23:10:01.726425162 +0000 UTC m=+105.891132627" Jan 26 23:10:02 crc kubenswrapper[4995]: I0126 23:10:02.717779 4995 generic.go:334] "Generic (PLEG): container finished" podID="dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f" containerID="975487f2a635a929bba403332114f06fc9e164d81ecc0aa07e72ed358806c284" exitCode=0 Jan 26 23:10:02 crc kubenswrapper[4995]: I0126 23:10:02.718167 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f","Type":"ContainerDied","Data":"975487f2a635a929bba403332114f06fc9e164d81ecc0aa07e72ed358806c284"} Jan 26 23:10:03 crc kubenswrapper[4995]: I0126 23:10:03.975324 4995 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-dns/dns-default-wt84d" Jan 26 23:10:07 crc kubenswrapper[4995]: I0126 23:10:07.517004 4995 patch_prober.go:28] interesting pod/downloads-7954f5f757-pfw4t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 26 23:10:07 crc kubenswrapper[4995]: I0126 23:10:07.517285 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pfw4t" podUID="ce7a362e-896b-4492-ac2c-08bd19bba7b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 26 23:10:07 crc kubenswrapper[4995]: I0126 23:10:07.517022 4995 patch_prober.go:28] interesting pod/downloads-7954f5f757-pfw4t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 26 23:10:07 crc kubenswrapper[4995]: I0126 23:10:07.517705 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-pfw4t" podUID="ce7a362e-896b-4492-ac2c-08bd19bba7b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 26 23:10:08 crc kubenswrapper[4995]: I0126 23:10:08.604784 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:10:08 crc kubenswrapper[4995]: I0126 23:10:08.620752 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.532783 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.577214 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kubelet-dir\") pod \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.577274 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kube-api-access\") pod \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.577665 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f" (UID: "dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.582812 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f" (UID: "dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.680596 4995 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.680664 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.814742 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f","Type":"ContainerDied","Data":"55757148a77a95d861771d7e26e070797711057e63a4fc15d0c0698103b9e006"} Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.814790 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55757148a77a95d861771d7e26e070797711057e63a4fc15d0c0698103b9e006" Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.814851 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:10:16 crc kubenswrapper[4995]: I0126 23:10:16.032085 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:10:17 crc kubenswrapper[4995]: I0126 23:10:17.521469 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-pfw4t" Jan 26 23:10:29 crc kubenswrapper[4995]: I0126 23:10:29.197658 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:10:33 crc kubenswrapper[4995]: E0126 23:10:33.774658 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 23:10:33 crc kubenswrapper[4995]: E0126 23:10:33.774938 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-df626,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wq2hm_openshift-marketplace(5166d9b5-534e-4426-8085-a1900c7bdafb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 23:10:33 crc kubenswrapper[4995]: E0126 23:10:33.776263 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wq2hm" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" Jan 26 23:10:35 crc 
kubenswrapper[4995]: E0126 23:10:35.328856 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wq2hm" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" Jan 26 23:10:35 crc kubenswrapper[4995]: E0126 23:10:35.417635 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 23:10:35 crc kubenswrapper[4995]: E0126 23:10:35.418172 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p76nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7ptdh_openshift-marketplace(869a6dc6-8120-4a1c-b424-1a06738aa55e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 23:10:35 crc kubenswrapper[4995]: E0126 23:10:35.419427 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7ptdh" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" Jan 26 23:10:36 crc 
kubenswrapper[4995]: I0126 23:10:36.197157 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 23:10:36 crc kubenswrapper[4995]: E0126 23:10:36.197566 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65" containerName="pruner" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.197668 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65" containerName="pruner" Jan 26 23:10:36 crc kubenswrapper[4995]: E0126 23:10:36.197742 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f" containerName="pruner" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.197817 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f" containerName="pruner" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.198012 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f" containerName="pruner" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.198135 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65" containerName="pruner" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.198631 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.206067 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.206161 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.212231 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.340973 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a88ce357-60cd-42cf-9482-f256204a2d72-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.341036 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a88ce357-60cd-42cf-9482-f256204a2d72-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.442737 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a88ce357-60cd-42cf-9482-f256204a2d72-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.442878 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a88ce357-60cd-42cf-9482-f256204a2d72-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.443335 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a88ce357-60cd-42cf-9482-f256204a2d72-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.465366 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a88ce357-60cd-42cf-9482-f256204a2d72-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.553392 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.159516 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7ptdh" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.229854 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.230036 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxr7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-q6mtp_openshift-marketplace(6aacdfb4-d893-49a9-ae77-a150f1c0a430): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.231545 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-q6mtp" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" Jan 26 23:10:37 crc 
kubenswrapper[4995]: E0126 23:10:37.349177 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.349579 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c27fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-px4t9_openshift-marketplace(38be674d-6ae2-441d-b361-a9eea3b694a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.350912 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-px4t9" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.370864 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.370991 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbb4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8z855_openshift-marketplace(b7295e1f-e3cb-4710-8763-b02b3e9ed67b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.372239 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8z855" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" Jan 26 23:10:37 crc 
kubenswrapper[4995]: E0126 23:10:37.429072 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.429272 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xbvbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-6wf22_openshift-marketplace(58513b5e-460e-4344-91e3-1d20e26fd533): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.430477 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6wf22" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" Jan 26 23:10:37 crc kubenswrapper[4995]: I0126 23:10:37.448363 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 23:10:37 crc kubenswrapper[4995]: W0126 23:10:37.460230 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda88ce357_60cd_42cf_9482_f256204a2d72.slice/crio-0ef496ef86ec50614443ca5da0ec4f600dc4e31183661f290d78b921912bd52e WatchSource:0}: Error finding container 0ef496ef86ec50614443ca5da0ec4f600dc4e31183661f290d78b921912bd52e: Status 404 returned error can't find the container with id 0ef496ef86ec50614443ca5da0ec4f600dc4e31183661f290d78b921912bd52e Jan 26 23:10:38 crc kubenswrapper[4995]: I0126 23:10:38.007070 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkghc" event={"ID":"2cf84b12-2476-4bdf-92f2-016c722f74b5","Type":"ContainerStarted","Data":"8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf"} Jan 26 23:10:38 crc kubenswrapper[4995]: I0126 23:10:38.010204 4995 generic.go:334] "Generic (PLEG): container finished" podID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerID="da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0" exitCode=0 Jan 26 23:10:38 crc kubenswrapper[4995]: I0126 23:10:38.010324 4995 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-9wv6w" event={"ID":"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c","Type":"ContainerDied","Data":"da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0"} Jan 26 23:10:38 crc kubenswrapper[4995]: I0126 23:10:38.015060 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a88ce357-60cd-42cf-9482-f256204a2d72","Type":"ContainerStarted","Data":"cb8107746b4601bf6993601dcd300e1d12614297b3e959491c279ed96b11e4c8"} Jan 26 23:10:38 crc kubenswrapper[4995]: I0126 23:10:38.015161 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a88ce357-60cd-42cf-9482-f256204a2d72","Type":"ContainerStarted","Data":"0ef496ef86ec50614443ca5da0ec4f600dc4e31183661f290d78b921912bd52e"} Jan 26 23:10:38 crc kubenswrapper[4995]: E0126 23:10:38.017903 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8z855" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" Jan 26 23:10:38 crc kubenswrapper[4995]: E0126 23:10:38.020222 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q6mtp" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" Jan 26 23:10:38 crc kubenswrapper[4995]: E0126 23:10:38.020270 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6wf22" 
podUID="58513b5e-460e-4344-91e3-1d20e26fd533" Jan 26 23:10:38 crc kubenswrapper[4995]: E0126 23:10:38.026018 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-px4t9" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" Jan 26 23:10:38 crc kubenswrapper[4995]: I0126 23:10:38.146924 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.146905848 podStartE2EDuration="2.146905848s" podCreationTimestamp="2026-01-26 23:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:10:38.142479221 +0000 UTC m=+142.307186686" watchObservedRunningTime="2026-01-26 23:10:38.146905848 +0000 UTC m=+142.311613313" Jan 26 23:10:39 crc kubenswrapper[4995]: I0126 23:10:39.022970 4995 generic.go:334] "Generic (PLEG): container finished" podID="a88ce357-60cd-42cf-9482-f256204a2d72" containerID="cb8107746b4601bf6993601dcd300e1d12614297b3e959491c279ed96b11e4c8" exitCode=0 Jan 26 23:10:39 crc kubenswrapper[4995]: I0126 23:10:39.023133 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a88ce357-60cd-42cf-9482-f256204a2d72","Type":"ContainerDied","Data":"cb8107746b4601bf6993601dcd300e1d12614297b3e959491c279ed96b11e4c8"} Jan 26 23:10:39 crc kubenswrapper[4995]: I0126 23:10:39.029784 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wv6w" event={"ID":"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c","Type":"ContainerStarted","Data":"d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027"} Jan 26 23:10:39 crc kubenswrapper[4995]: I0126 23:10:39.032091 4995 generic.go:334] 
"Generic (PLEG): container finished" podID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerID="8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf" exitCode=0 Jan 26 23:10:39 crc kubenswrapper[4995]: I0126 23:10:39.032162 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkghc" event={"ID":"2cf84b12-2476-4bdf-92f2-016c722f74b5","Type":"ContainerDied","Data":"8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf"} Jan 26 23:10:39 crc kubenswrapper[4995]: I0126 23:10:39.075078 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9wv6w" podStartSLOduration=3.282365653 podStartE2EDuration="42.075060013s" podCreationTimestamp="2026-01-26 23:09:57 +0000 UTC" firstStartedPulling="2026-01-26 23:09:59.64243597 +0000 UTC m=+103.807143435" lastFinishedPulling="2026-01-26 23:10:38.43513033 +0000 UTC m=+142.599837795" observedRunningTime="2026-01-26 23:10:39.071877159 +0000 UTC m=+143.236584644" watchObservedRunningTime="2026-01-26 23:10:39.075060013 +0000 UTC m=+143.239767478" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.038359 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkghc" event={"ID":"2cf84b12-2476-4bdf-92f2-016c722f74b5","Type":"ContainerStarted","Data":"81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb"} Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.278718 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.295365 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tkghc" podStartSLOduration=4.477526972 podStartE2EDuration="42.295345824s" podCreationTimestamp="2026-01-26 23:09:58 +0000 UTC" firstStartedPulling="2026-01-26 23:10:01.709194683 +0000 UTC m=+105.873902148" lastFinishedPulling="2026-01-26 23:10:39.527013525 +0000 UTC m=+143.691721000" observedRunningTime="2026-01-26 23:10:40.058649845 +0000 UTC m=+144.223357310" watchObservedRunningTime="2026-01-26 23:10:40.295345824 +0000 UTC m=+144.460053289" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.458780 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a88ce357-60cd-42cf-9482-f256204a2d72-kubelet-dir\") pod \"a88ce357-60cd-42cf-9482-f256204a2d72\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.458877 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a88ce357-60cd-42cf-9482-f256204a2d72-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a88ce357-60cd-42cf-9482-f256204a2d72" (UID: "a88ce357-60cd-42cf-9482-f256204a2d72"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.458896 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a88ce357-60cd-42cf-9482-f256204a2d72-kube-api-access\") pod \"a88ce357-60cd-42cf-9482-f256204a2d72\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.459440 4995 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a88ce357-60cd-42cf-9482-f256204a2d72-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.464495 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a88ce357-60cd-42cf-9482-f256204a2d72-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a88ce357-60cd-42cf-9482-f256204a2d72" (UID: "a88ce357-60cd-42cf-9482-f256204a2d72"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.560344 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a88ce357-60cd-42cf-9482-f256204a2d72-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.893278 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.893626 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.996653 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 23:10:40 crc kubenswrapper[4995]: E0126 23:10:40.996966 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88ce357-60cd-42cf-9482-f256204a2d72" containerName="pruner" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.996983 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88ce357-60cd-42cf-9482-f256204a2d72" containerName="pruner" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.997180 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a88ce357-60cd-42cf-9482-f256204a2d72" containerName="pruner" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.997718 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.006028 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.043969 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a88ce357-60cd-42cf-9482-f256204a2d72","Type":"ContainerDied","Data":"0ef496ef86ec50614443ca5da0ec4f600dc4e31183661f290d78b921912bd52e"} Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.044004 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ef496ef86ec50614443ca5da0ec4f600dc4e31183661f290d78b921912bd52e" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.044049 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.066837 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kube-api-access\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.066972 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-var-lock\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.067004 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.168129 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-var-lock\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.168201 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.168264 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kube-api-access\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.168279 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-var-lock\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.168369 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.186079 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kube-api-access\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.317866 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.522520 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 23:10:41 crc kubenswrapper[4995]: W0126 23:10:41.530661 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbd391cd1_35c1_4ee8_98a3_80c0d9cec0e9.slice/crio-357a888df4ab9fde8c5f839f2b79ccca8d003726e12de5c60c7632053574ba79 WatchSource:0}: Error finding container 357a888df4ab9fde8c5f839f2b79ccca8d003726e12de5c60c7632053574ba79: Status 404 returned error can't find the container with id 357a888df4ab9fde8c5f839f2b79ccca8d003726e12de5c60c7632053574ba79 Jan 26 23:10:42 crc kubenswrapper[4995]: I0126 23:10:42.050872 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9","Type":"ContainerStarted","Data":"0df4d5b5f690365e5d4a48931cdee454a300ba6752a514b09c733175475487c8"} Jan 26 23:10:42 crc kubenswrapper[4995]: I0126 23:10:42.050926 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9","Type":"ContainerStarted","Data":"357a888df4ab9fde8c5f839f2b79ccca8d003726e12de5c60c7632053574ba79"} Jan 26 23:10:42 crc kubenswrapper[4995]: I0126 23:10:42.066161 4995 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.066137723 podStartE2EDuration="2.066137723s" podCreationTimestamp="2026-01-26 23:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:10:42.063930405 +0000 UTC m=+146.228637870" watchObservedRunningTime="2026-01-26 23:10:42.066137723 +0000 UTC m=+146.230845188" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.610227 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.610762 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.610824 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.610883 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.612001 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.614761 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.614885 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.621769 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.622773 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.629095 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.635626 4995 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.638244 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.742767 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.757207 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.771546 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:10:45 crc kubenswrapper[4995]: I0126 23:10:45.068559 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9f5c6275c4a873be12c03f32d919186152fdddb54566874aca136d669b44858d"} Jan 26 23:10:45 crc kubenswrapper[4995]: W0126 23:10:45.298259 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-916404635fe564389b34611a150390bc7fba03932f77e2a4b46e9543fcf67f57 WatchSource:0}: Error finding container 916404635fe564389b34611a150390bc7fba03932f77e2a4b46e9543fcf67f57: Status 404 returned error can't find the container with id 916404635fe564389b34611a150390bc7fba03932f77e2a4b46e9543fcf67f57 Jan 26 23:10:46 crc kubenswrapper[4995]: I0126 23:10:46.078307 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c1d526f80651045d33440f9de46e37fea79d0f7d99966fa5859efbe73f04584e"} Jan 26 23:10:46 crc kubenswrapper[4995]: I0126 23:10:46.078675 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"916404635fe564389b34611a150390bc7fba03932f77e2a4b46e9543fcf67f57"} Jan 26 23:10:46 crc kubenswrapper[4995]: I0126 23:10:46.081668 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"44e18be84c50228e1c4ef781aaf7488155aa444f8d111121ce48b2a0ad30dcbe"} Jan 26 23:10:46 crc kubenswrapper[4995]: I0126 23:10:46.081714 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"898fd627d3440adbd2f5ba505854131403b4f178136a6812b1b76d0f86eb41f7"} Jan 26 23:10:46 crc kubenswrapper[4995]: I0126 23:10:46.082305 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:10:46 crc kubenswrapper[4995]: I0126 23:10:46.084673 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2815508e1c4e25728ff5a6ec781eca97c15b0fad259431fdf47b7efb25ac98f4"} Jan 26 23:10:48 crc kubenswrapper[4995]: I0126 23:10:48.190169 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:10:48 crc kubenswrapper[4995]: I0126 23:10:48.190500 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:10:48 crc kubenswrapper[4995]: I0126 23:10:48.327038 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:10:49 crc kubenswrapper[4995]: I0126 23:10:49.135451 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:10:49 crc kubenswrapper[4995]: I0126 23:10:49.177430 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wv6w"] Jan 26 23:10:49 crc kubenswrapper[4995]: I0126 
23:10:49.205213 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:10:49 crc kubenswrapper[4995]: I0126 23:10:49.205278 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:10:49 crc kubenswrapper[4995]: I0126 23:10:49.244528 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:10:50 crc kubenswrapper[4995]: I0126 23:10:50.152292 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.114528 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq2hm" event={"ID":"5166d9b5-534e-4426-8085-a1900c7bdafb","Type":"ContainerStarted","Data":"e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706"} Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.114917 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9wv6w" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="registry-server" containerID="cri-o://d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027" gracePeriod=2 Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.359910 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tkghc"] Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.612455 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.694700 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-utilities\") pod \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.694764 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn7v7\" (UniqueName: \"kubernetes.io/projected/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-kube-api-access-jn7v7\") pod \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.694820 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-catalog-content\") pod \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.695568 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-utilities" (OuterVolumeSpecName: "utilities") pod "387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" (UID: "387c9fb6-21cf-40c7-b6c9-0f8f50359d0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.703360 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-kube-api-access-jn7v7" (OuterVolumeSpecName: "kube-api-access-jn7v7") pod "387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" (UID: "387c9fb6-21cf-40c7-b6c9-0f8f50359d0c"). InnerVolumeSpecName "kube-api-access-jn7v7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.717049 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" (UID: "387c9fb6-21cf-40c7-b6c9-0f8f50359d0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.796030 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.796059 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn7v7\" (UniqueName: \"kubernetes.io/projected/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-kube-api-access-jn7v7\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.796070 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.121414 4995 generic.go:334] "Generic (PLEG): container finished" podID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerID="e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706" exitCode=0 Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.121472 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq2hm" event={"ID":"5166d9b5-534e-4426-8085-a1900c7bdafb","Type":"ContainerDied","Data":"e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706"} Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.123870 4995 generic.go:334] "Generic (PLEG): container 
finished" podID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerID="0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6" exitCode=0 Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.123934 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px4t9" event={"ID":"38be674d-6ae2-441d-b361-a9eea3b694a7","Type":"ContainerDied","Data":"0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6"} Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.129122 4995 generic.go:334] "Generic (PLEG): container finished" podID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerID="d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027" exitCode=0 Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.129166 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wv6w" event={"ID":"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c","Type":"ContainerDied","Data":"d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027"} Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.129178 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.129225 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wv6w" event={"ID":"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c","Type":"ContainerDied","Data":"a0635a7bf961355bc048d1c04e92285d7c8c240f172e625a758ab7fa01b816d1"} Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.129249 4995 scope.go:117] "RemoveContainer" containerID="d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.130861 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerID="4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda" exitCode=0 Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.131130 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tkghc" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="registry-server" containerID="cri-o://81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb" gracePeriod=2 Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.131372 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z855" event={"ID":"b7295e1f-e3cb-4710-8763-b02b3e9ed67b","Type":"ContainerDied","Data":"4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda"} Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.174560 4995 scope.go:117] "RemoveContainer" containerID="da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.199717 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wv6w"] Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.201925 4995 scope.go:117] "RemoveContainer" 
containerID="e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.201930 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wv6w"] Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.316255 4995 scope.go:117] "RemoveContainer" containerID="d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027" Jan 26 23:10:52 crc kubenswrapper[4995]: E0126 23:10:52.316679 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027\": container with ID starting with d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027 not found: ID does not exist" containerID="d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.316729 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027"} err="failed to get container status \"d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027\": rpc error: code = NotFound desc = could not find container \"d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027\": container with ID starting with d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027 not found: ID does not exist" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.316778 4995 scope.go:117] "RemoveContainer" containerID="da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0" Jan 26 23:10:52 crc kubenswrapper[4995]: E0126 23:10:52.317263 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0\": container with ID starting with 
da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0 not found: ID does not exist" containerID="da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.317293 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0"} err="failed to get container status \"da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0\": rpc error: code = NotFound desc = could not find container \"da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0\": container with ID starting with da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0 not found: ID does not exist" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.317313 4995 scope.go:117] "RemoveContainer" containerID="e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55" Jan 26 23:10:52 crc kubenswrapper[4995]: E0126 23:10:52.317670 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55\": container with ID starting with e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55 not found: ID does not exist" containerID="e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.317688 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55"} err="failed to get container status \"e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55\": rpc error: code = NotFound desc = could not find container \"e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55\": container with ID starting with e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55 not found: ID does not 
exist" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.480502 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.505240 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-catalog-content\") pod \"2cf84b12-2476-4bdf-92f2-016c722f74b5\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.505311 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h6bf\" (UniqueName: \"kubernetes.io/projected/2cf84b12-2476-4bdf-92f2-016c722f74b5-kube-api-access-5h6bf\") pod \"2cf84b12-2476-4bdf-92f2-016c722f74b5\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.505339 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-utilities\") pod \"2cf84b12-2476-4bdf-92f2-016c722f74b5\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.506401 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-utilities" (OuterVolumeSpecName: "utilities") pod "2cf84b12-2476-4bdf-92f2-016c722f74b5" (UID: "2cf84b12-2476-4bdf-92f2-016c722f74b5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.509143 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf84b12-2476-4bdf-92f2-016c722f74b5-kube-api-access-5h6bf" (OuterVolumeSpecName: "kube-api-access-5h6bf") pod "2cf84b12-2476-4bdf-92f2-016c722f74b5" (UID: "2cf84b12-2476-4bdf-92f2-016c722f74b5"). InnerVolumeSpecName "kube-api-access-5h6bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.526744 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" path="/var/lib/kubelet/pods/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c/volumes" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.607338 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h6bf\" (UniqueName: \"kubernetes.io/projected/2cf84b12-2476-4bdf-92f2-016c722f74b5-kube-api-access-5h6bf\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.607390 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.644073 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cf84b12-2476-4bdf-92f2-016c722f74b5" (UID: "2cf84b12-2476-4bdf-92f2-016c722f74b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.708652 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.143430 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px4t9" event={"ID":"38be674d-6ae2-441d-b361-a9eea3b694a7","Type":"ContainerStarted","Data":"049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429"} Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.148369 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq2hm" event={"ID":"5166d9b5-534e-4426-8085-a1900c7bdafb","Type":"ContainerStarted","Data":"4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba"} Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.151369 4995 generic.go:334] "Generic (PLEG): container finished" podID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerID="81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb" exitCode=0 Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.151402 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.151435 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkghc" event={"ID":"2cf84b12-2476-4bdf-92f2-016c722f74b5","Type":"ContainerDied","Data":"81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb"} Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.151458 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkghc" event={"ID":"2cf84b12-2476-4bdf-92f2-016c722f74b5","Type":"ContainerDied","Data":"d97600484aa0b6e5a49fbd12d065990500a065d91605ffe0a38d4313a4ca5f29"} Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.151475 4995 scope.go:117] "RemoveContainer" containerID="81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.153544 4995 generic.go:334] "Generic (PLEG): container finished" podID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerID="277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576" exitCode=0 Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.153585 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ptdh" event={"ID":"869a6dc6-8120-4a1c-b424-1a06738aa55e","Type":"ContainerDied","Data":"277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576"} Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.160183 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z855" event={"ID":"b7295e1f-e3cb-4710-8763-b02b3e9ed67b","Type":"ContainerStarted","Data":"2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40"} Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.168241 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-px4t9" 
podStartSLOduration=2.238175267 podStartE2EDuration="56.168224253s" podCreationTimestamp="2026-01-26 23:09:57 +0000 UTC" firstStartedPulling="2026-01-26 23:09:58.63095429 +0000 UTC m=+102.795661755" lastFinishedPulling="2026-01-26 23:10:52.561003276 +0000 UTC m=+156.725710741" observedRunningTime="2026-01-26 23:10:53.163731075 +0000 UTC m=+157.328438540" watchObservedRunningTime="2026-01-26 23:10:53.168224253 +0000 UTC m=+157.332931718" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.184285 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8z855" podStartSLOduration=2.087719936 podStartE2EDuration="58.184272258s" podCreationTimestamp="2026-01-26 23:09:55 +0000 UTC" firstStartedPulling="2026-01-26 23:09:56.543389591 +0000 UTC m=+100.708097056" lastFinishedPulling="2026-01-26 23:10:52.639941913 +0000 UTC m=+156.804649378" observedRunningTime="2026-01-26 23:10:53.182329586 +0000 UTC m=+157.347037051" watchObservedRunningTime="2026-01-26 23:10:53.184272258 +0000 UTC m=+157.348979723" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.186899 4995 scope.go:117] "RemoveContainer" containerID="8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.220807 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wq2hm" podStartSLOduration=3.205879869 podStartE2EDuration="55.220790864s" podCreationTimestamp="2026-01-26 23:09:58 +0000 UTC" firstStartedPulling="2026-01-26 23:10:00.657663581 +0000 UTC m=+104.822371046" lastFinishedPulling="2026-01-26 23:10:52.672574576 +0000 UTC m=+156.837282041" observedRunningTime="2026-01-26 23:10:53.21799555 +0000 UTC m=+157.382703015" watchObservedRunningTime="2026-01-26 23:10:53.220790864 +0000 UTC m=+157.385498329" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.232546 4995 scope.go:117] "RemoveContainer" 
containerID="9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.252230 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tkghc"] Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.255249 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tkghc"] Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.275983 4995 scope.go:117] "RemoveContainer" containerID="81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb" Jan 26 23:10:53 crc kubenswrapper[4995]: E0126 23:10:53.277496 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb\": container with ID starting with 81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb not found: ID does not exist" containerID="81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.277540 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb"} err="failed to get container status \"81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb\": rpc error: code = NotFound desc = could not find container \"81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb\": container with ID starting with 81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb not found: ID does not exist" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.277574 4995 scope.go:117] "RemoveContainer" containerID="8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf" Jan 26 23:10:53 crc kubenswrapper[4995]: E0126 23:10:53.277818 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf\": container with ID starting with 8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf not found: ID does not exist" containerID="8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.277847 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf"} err="failed to get container status \"8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf\": rpc error: code = NotFound desc = could not find container \"8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf\": container with ID starting with 8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf not found: ID does not exist" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.277866 4995 scope.go:117] "RemoveContainer" containerID="9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462" Jan 26 23:10:53 crc kubenswrapper[4995]: E0126 23:10:53.278164 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462\": container with ID starting with 9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462 not found: ID does not exist" containerID="9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.278208 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462"} err="failed to get container status \"9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462\": rpc error: code = NotFound desc = could not find container 
\"9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462\": container with ID starting with 9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462 not found: ID does not exist" Jan 26 23:10:54 crc kubenswrapper[4995]: I0126 23:10:54.168629 4995 generic.go:334] "Generic (PLEG): container finished" podID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerID="67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359" exitCode=0 Jan 26 23:10:54 crc kubenswrapper[4995]: I0126 23:10:54.168713 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6mtp" event={"ID":"6aacdfb4-d893-49a9-ae77-a150f1c0a430","Type":"ContainerDied","Data":"67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359"} Jan 26 23:10:54 crc kubenswrapper[4995]: I0126 23:10:54.173253 4995 generic.go:334] "Generic (PLEG): container finished" podID="58513b5e-460e-4344-91e3-1d20e26fd533" containerID="51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b" exitCode=0 Jan 26 23:10:54 crc kubenswrapper[4995]: I0126 23:10:54.173324 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wf22" event={"ID":"58513b5e-460e-4344-91e3-1d20e26fd533","Type":"ContainerDied","Data":"51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b"} Jan 26 23:10:54 crc kubenswrapper[4995]: I0126 23:10:54.176978 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ptdh" event={"ID":"869a6dc6-8120-4a1c-b424-1a06738aa55e","Type":"ContainerStarted","Data":"93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899"} Jan 26 23:10:54 crc kubenswrapper[4995]: I0126 23:10:54.226857 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7ptdh" podStartSLOduration=3.168975587 podStartE2EDuration="59.226837029s" podCreationTimestamp="2026-01-26 23:09:55 +0000 UTC" 
firstStartedPulling="2026-01-26 23:09:57.575841731 +0000 UTC m=+101.740549196" lastFinishedPulling="2026-01-26 23:10:53.633703173 +0000 UTC m=+157.798410638" observedRunningTime="2026-01-26 23:10:54.22458754 +0000 UTC m=+158.389295015" watchObservedRunningTime="2026-01-26 23:10:54.226837029 +0000 UTC m=+158.391544494" Jan 26 23:10:54 crc kubenswrapper[4995]: I0126 23:10:54.524567 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" path="/var/lib/kubelet/pods/2cf84b12-2476-4bdf-92f2-016c722f74b5/volumes" Jan 26 23:10:55 crc kubenswrapper[4995]: I0126 23:10:55.773053 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:10:55 crc kubenswrapper[4995]: I0126 23:10:55.773095 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:10:55 crc kubenswrapper[4995]: I0126 23:10:55.814027 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:10:56 crc kubenswrapper[4995]: I0126 23:10:56.182016 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:10:56 crc kubenswrapper[4995]: I0126 23:10:56.182077 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:10:56 crc kubenswrapper[4995]: I0126 23:10:56.235267 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:10:57 crc kubenswrapper[4995]: I0126 23:10:57.792951 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:10:57 crc kubenswrapper[4995]: I0126 23:10:57.793276 4995 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:10:57 crc kubenswrapper[4995]: I0126 23:10:57.846999 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:10:58 crc kubenswrapper[4995]: I0126 23:10:58.236315 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:10:58 crc kubenswrapper[4995]: I0126 23:10:58.254909 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:10:58 crc kubenswrapper[4995]: I0126 23:10:58.791756 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:10:58 crc kubenswrapper[4995]: I0126 23:10:58.792085 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:10:58 crc kubenswrapper[4995]: I0126 23:10:58.846118 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:10:59 crc kubenswrapper[4995]: I0126 23:10:59.202273 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6mtp" event={"ID":"6aacdfb4-d893-49a9-ae77-a150f1c0a430","Type":"ContainerStarted","Data":"bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02"} Jan 26 23:10:59 crc kubenswrapper[4995]: I0126 23:10:59.206163 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wf22" event={"ID":"58513b5e-460e-4344-91e3-1d20e26fd533","Type":"ContainerStarted","Data":"8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a"} Jan 26 23:10:59 crc kubenswrapper[4995]: I0126 23:10:59.225319 4995 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/community-operators-q6mtp" podStartSLOduration=2.788459708 podStartE2EDuration="1m4.225301006s" podCreationTimestamp="2026-01-26 23:09:55 +0000 UTC" firstStartedPulling="2026-01-26 23:09:56.542868718 +0000 UTC m=+100.707576183" lastFinishedPulling="2026-01-26 23:10:57.979710016 +0000 UTC m=+162.144417481" observedRunningTime="2026-01-26 23:10:59.224323021 +0000 UTC m=+163.389030516" watchObservedRunningTime="2026-01-26 23:10:59.225301006 +0000 UTC m=+163.390008481" Jan 26 23:10:59 crc kubenswrapper[4995]: I0126 23:10:59.249238 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6wf22" podStartSLOduration=1.9217418670000002 podStartE2EDuration="1m4.249204649s" podCreationTimestamp="2026-01-26 23:09:55 +0000 UTC" firstStartedPulling="2026-01-26 23:09:56.548232889 +0000 UTC m=+100.712940354" lastFinishedPulling="2026-01-26 23:10:58.875695671 +0000 UTC m=+163.040403136" observedRunningTime="2026-01-26 23:10:59.243268342 +0000 UTC m=+163.407975807" watchObservedRunningTime="2026-01-26 23:10:59.249204649 +0000 UTC m=+163.413912124" Jan 26 23:10:59 crc kubenswrapper[4995]: I0126 23:10:59.273661 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:10:59 crc kubenswrapper[4995]: I0126 23:10:59.762355 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7ptdh"] Jan 26 23:11:00 crc kubenswrapper[4995]: I0126 23:11:00.211226 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7ptdh" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="registry-server" containerID="cri-o://93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899" gracePeriod=2 Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.143351 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.220466 4995 generic.go:334] "Generic (PLEG): container finished" podID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerID="93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899" exitCode=0 Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.220504 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ptdh" event={"ID":"869a6dc6-8120-4a1c-b424-1a06738aa55e","Type":"ContainerDied","Data":"93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899"} Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.220535 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ptdh" event={"ID":"869a6dc6-8120-4a1c-b424-1a06738aa55e","Type":"ContainerDied","Data":"20edf153be996b3cf630c557f436ea3736b0f71a5fce8a127880088910f8cf24"} Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.220554 4995 scope.go:117] "RemoveContainer" containerID="93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.220587 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.236574 4995 scope.go:117] "RemoveContainer" containerID="277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.257433 4995 scope.go:117] "RemoveContainer" containerID="86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.261037 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p76nx\" (UniqueName: \"kubernetes.io/projected/869a6dc6-8120-4a1c-b424-1a06738aa55e-kube-api-access-p76nx\") pod \"869a6dc6-8120-4a1c-b424-1a06738aa55e\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.261184 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-utilities\") pod \"869a6dc6-8120-4a1c-b424-1a06738aa55e\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.261270 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-catalog-content\") pod \"869a6dc6-8120-4a1c-b424-1a06738aa55e\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.262133 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-utilities" (OuterVolumeSpecName: "utilities") pod "869a6dc6-8120-4a1c-b424-1a06738aa55e" (UID: "869a6dc6-8120-4a1c-b424-1a06738aa55e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.267325 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869a6dc6-8120-4a1c-b424-1a06738aa55e-kube-api-access-p76nx" (OuterVolumeSpecName: "kube-api-access-p76nx") pod "869a6dc6-8120-4a1c-b424-1a06738aa55e" (UID: "869a6dc6-8120-4a1c-b424-1a06738aa55e"). InnerVolumeSpecName "kube-api-access-p76nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.270991 4995 scope.go:117] "RemoveContainer" containerID="93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899" Jan 26 23:11:01 crc kubenswrapper[4995]: E0126 23:11:01.271931 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899\": container with ID starting with 93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899 not found: ID does not exist" containerID="93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.271964 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899"} err="failed to get container status \"93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899\": rpc error: code = NotFound desc = could not find container \"93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899\": container with ID starting with 93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899 not found: ID does not exist" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.271984 4995 scope.go:117] "RemoveContainer" containerID="277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576" Jan 26 23:11:01 crc kubenswrapper[4995]: E0126 23:11:01.272459 
4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576\": container with ID starting with 277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576 not found: ID does not exist" containerID="277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.272476 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576"} err="failed to get container status \"277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576\": rpc error: code = NotFound desc = could not find container \"277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576\": container with ID starting with 277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576 not found: ID does not exist" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.272489 4995 scope.go:117] "RemoveContainer" containerID="86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f" Jan 26 23:11:01 crc kubenswrapper[4995]: E0126 23:11:01.272964 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f\": container with ID starting with 86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f not found: ID does not exist" containerID="86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.272980 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f"} err="failed to get container status \"86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f\": rpc error: code = 
NotFound desc = could not find container \"86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f\": container with ID starting with 86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f not found: ID does not exist" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.310056 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "869a6dc6-8120-4a1c-b424-1a06738aa55e" (UID: "869a6dc6-8120-4a1c-b424-1a06738aa55e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.362723 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p76nx\" (UniqueName: \"kubernetes.io/projected/869a6dc6-8120-4a1c-b424-1a06738aa55e-kube-api-access-p76nx\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.362765 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.362778 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.562056 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7ptdh"] Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.572176 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7ptdh"] Jan 26 23:11:02 crc kubenswrapper[4995]: I0126 23:11:02.526647 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" path="/var/lib/kubelet/pods/869a6dc6-8120-4a1c-b424-1a06738aa55e/volumes" Jan 26 23:11:05 crc kubenswrapper[4995]: I0126 23:11:05.576093 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:11:05 crc kubenswrapper[4995]: I0126 23:11:05.576556 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:11:05 crc kubenswrapper[4995]: I0126 23:11:05.665802 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:11:05 crc kubenswrapper[4995]: I0126 23:11:05.842139 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:11:06 crc kubenswrapper[4995]: I0126 23:11:06.019324 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:11:06 crc kubenswrapper[4995]: I0126 23:11:06.019607 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:11:06 crc kubenswrapper[4995]: I0126 23:11:06.052658 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:11:06 crc kubenswrapper[4995]: I0126 23:11:06.312420 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:11:06 crc kubenswrapper[4995]: I0126 23:11:06.321411 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:11:06 crc kubenswrapper[4995]: I0126 23:11:06.418557 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-tzh2d"] Jan 26 23:11:08 crc kubenswrapper[4995]: I0126 23:11:08.769912 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6mtp"] Jan 26 23:11:08 crc kubenswrapper[4995]: I0126 23:11:08.770313 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q6mtp" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="registry-server" containerID="cri-o://bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02" gracePeriod=2 Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.163120 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.262591 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxr7q\" (UniqueName: \"kubernetes.io/projected/6aacdfb4-d893-49a9-ae77-a150f1c0a430-kube-api-access-dxr7q\") pod \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.262636 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-utilities\") pod \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.262662 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-catalog-content\") pod \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.263782 4995 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-utilities" (OuterVolumeSpecName: "utilities") pod "6aacdfb4-d893-49a9-ae77-a150f1c0a430" (UID: "6aacdfb4-d893-49a9-ae77-a150f1c0a430"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.271337 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aacdfb4-d893-49a9-ae77-a150f1c0a430-kube-api-access-dxr7q" (OuterVolumeSpecName: "kube-api-access-dxr7q") pod "6aacdfb4-d893-49a9-ae77-a150f1c0a430" (UID: "6aacdfb4-d893-49a9-ae77-a150f1c0a430"). InnerVolumeSpecName "kube-api-access-dxr7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.277504 4995 generic.go:334] "Generic (PLEG): container finished" podID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerID="bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02" exitCode=0 Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.277536 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6mtp" event={"ID":"6aacdfb4-d893-49a9-ae77-a150f1c0a430","Type":"ContainerDied","Data":"bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02"} Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.277557 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.277591 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6mtp" event={"ID":"6aacdfb4-d893-49a9-ae77-a150f1c0a430","Type":"ContainerDied","Data":"20ff719d1a611af55cb9cea51a19289e5f98717222c932e73ed4f4672c8a5fcb"} Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.277612 4995 scope.go:117] "RemoveContainer" containerID="bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.292688 4995 scope.go:117] "RemoveContainer" containerID="67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.305763 4995 scope.go:117] "RemoveContainer" containerID="ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.321442 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6aacdfb4-d893-49a9-ae77-a150f1c0a430" (UID: "6aacdfb4-d893-49a9-ae77-a150f1c0a430"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.324060 4995 scope.go:117] "RemoveContainer" containerID="bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02" Jan 26 23:11:09 crc kubenswrapper[4995]: E0126 23:11:09.324777 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02\": container with ID starting with bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02 not found: ID does not exist" containerID="bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.324821 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02"} err="failed to get container status \"bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02\": rpc error: code = NotFound desc = could not find container \"bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02\": container with ID starting with bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02 not found: ID does not exist" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.324849 4995 scope.go:117] "RemoveContainer" containerID="67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359" Jan 26 23:11:09 crc kubenswrapper[4995]: E0126 23:11:09.325176 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359\": container with ID starting with 67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359 not found: ID does not exist" containerID="67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.325215 
4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359"} err="failed to get container status \"67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359\": rpc error: code = NotFound desc = could not find container \"67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359\": container with ID starting with 67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359 not found: ID does not exist" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.325237 4995 scope.go:117] "RemoveContainer" containerID="ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9" Jan 26 23:11:09 crc kubenswrapper[4995]: E0126 23:11:09.325531 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9\": container with ID starting with ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9 not found: ID does not exist" containerID="ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.325557 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9"} err="failed to get container status \"ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9\": rpc error: code = NotFound desc = could not find container \"ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9\": container with ID starting with ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9 not found: ID does not exist" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.363891 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxr7q\" (UniqueName: 
\"kubernetes.io/projected/6aacdfb4-d893-49a9-ae77-a150f1c0a430-kube-api-access-dxr7q\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.363940 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.363954 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.601865 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6mtp"] Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.604913 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q6mtp"] Jan 26 23:11:10 crc kubenswrapper[4995]: I0126 23:11:10.529083 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" path="/var/lib/kubelet/pods/6aacdfb4-d893-49a9-ae77-a150f1c0a430/volumes" Jan 26 23:11:10 crc kubenswrapper[4995]: I0126 23:11:10.894130 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:11:10 crc kubenswrapper[4995]: I0126 23:11:10.894202 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 
23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.501221 4995 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502391 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502422 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502456 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502474 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502503 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502520 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502539 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502558 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502580 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="extract-content" Jan 26 23:11:19 crc 
kubenswrapper[4995]: I0126 23:11:19.502596 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="extract-content" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502624 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502638 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502661 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502675 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502692 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502760 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502785 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="extract-content" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502798 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="extract-content" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502819 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="extract-content" Jan 26 23:11:19 crc 
kubenswrapper[4995]: I0126 23:11:19.502831 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="extract-content" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502846 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="extract-content" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502858 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="extract-content" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502875 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502888 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.503067 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.503094 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.503149 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.503166 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.503746 4995 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 
23:11:19.504004 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504020 4995 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504349 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3" gracePeriod=15 Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504404 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4" gracePeriod=15 Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504574 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561" gracePeriod=15 Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504588 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6" gracePeriod=15 Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504668 4995 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b" gracePeriod=15 Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.504887 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504905 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.504921 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504930 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.504943 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504951 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.504967 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504975 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.504985 4995 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504992 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.505001 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505008 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.505022 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505029 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.505038 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505045 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505186 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505199 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc 
kubenswrapper[4995]: I0126 23:11:19.505210 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505218 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505228 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505238 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505433 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.508930 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.538063 4995 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685443 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685518 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685556 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685594 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685671 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685684 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685703 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685718 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.786899 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.786945 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.786974 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.786990 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787011 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787026 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787054 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787069 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787086 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787080 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787146 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787148 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787183 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787083 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787158 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787084 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.839018 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: W0126 23:11:19.864248 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-3e9bb697cca6d93eeafdec7d9cdfad94d568f2445b630d91c9de8c91272c73a9 WatchSource:0}: Error finding container 3e9bb697cca6d93eeafdec7d9cdfad94d568f2445b630d91c9de8c91272c73a9: Status 404 returned error can't find the container with id 3e9bb697cca6d93eeafdec7d9cdfad94d568f2445b630d91c9de8c91272c73a9 Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.867064 4995 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e6ac0ca7c71bb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 23:11:19.866601915 +0000 UTC m=+184.031309380,LastTimestamp:2026-01-26 23:11:19.866601915 +0000 UTC m=+184.031309380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.335986 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.338405 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.339314 4995 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4" exitCode=0 Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.339347 4995 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561" exitCode=0 Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.339361 4995 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b" exitCode=0 Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.339373 4995 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6" exitCode=2 Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.339438 4995 scope.go:117] "RemoveContainer" containerID="dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7" Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.342830 4995 generic.go:334] "Generic (PLEG): container finished" podID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" containerID="0df4d5b5f690365e5d4a48931cdee454a300ba6752a514b09c733175475487c8" exitCode=0 Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.343005 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9","Type":"ContainerDied","Data":"0df4d5b5f690365e5d4a48931cdee454a300ba6752a514b09c733175475487c8"} Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.344008 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.344797 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae"} Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.344828 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3e9bb697cca6d93eeafdec7d9cdfad94d568f2445b630d91c9de8c91272c73a9"} Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.345587 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:20 crc kubenswrapper[4995]: E0126 23:11:20.349222 4995 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.357023 4995 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.647814 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.649337 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.712321 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-var-lock\") pod \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.712591 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kube-api-access\") pod \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.712626 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kubelet-dir\") pod \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.712458 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-var-lock" 
(OuterVolumeSpecName: "var-lock") pod "bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" (UID: "bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.712868 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" (UID: "bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.718615 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" (UID: "bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.813846 4995 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.813880 4995 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.813891 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.304366 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.305167 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.306328 4995 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.306883 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.371405 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.372358 4995 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3" exitCode=0 Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.372425 4995 scope.go:117] "RemoveContainer" containerID="1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.372497 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.374454 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9","Type":"ContainerDied","Data":"357a888df4ab9fde8c5f839f2b79ccca8d003726e12de5c60c7632053574ba79"} Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.374474 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="357a888df4ab9fde8c5f839f2b79ccca8d003726e12de5c60c7632053574ba79" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.374513 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.388310 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.388463 4995 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.397980 4995 scope.go:117] "RemoveContainer" containerID="1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.423883 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.423977 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.424007 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.424017 4995 scope.go:117] "RemoveContainer" containerID="079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.424081 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.424300 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.424313 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.424554 4995 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.450734 4995 scope.go:117] "RemoveContainer" containerID="b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.470988 4995 scope.go:117] "RemoveContainer" containerID="bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.491356 4995 scope.go:117] "RemoveContainer" containerID="701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.515293 4995 scope.go:117] "RemoveContainer" containerID="1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4" Jan 26 23:11:22 crc kubenswrapper[4995]: E0126 23:11:22.515949 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\": container with ID starting with 1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4 not found: ID does not exist" containerID="1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.516014 4995 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4"} err="failed to get container status \"1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\": rpc error: code = NotFound desc = could not find container \"1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\": container with ID starting with 1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4 not found: ID does not exist" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.516042 4995 scope.go:117] "RemoveContainer" containerID="1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561" Jan 26 23:11:22 crc kubenswrapper[4995]: E0126 23:11:22.516816 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\": container with ID starting with 1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561 not found: ID does not exist" containerID="1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.516836 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561"} err="failed to get container status \"1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\": rpc error: code = NotFound desc = could not find container \"1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\": container with ID starting with 1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561 not found: ID does not exist" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.516850 4995 scope.go:117] "RemoveContainer" containerID="079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b" Jan 26 23:11:22 crc kubenswrapper[4995]: E0126 
23:11:22.517225 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\": container with ID starting with 079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b not found: ID does not exist" containerID="079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.517265 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b"} err="failed to get container status \"079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\": rpc error: code = NotFound desc = could not find container \"079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\": container with ID starting with 079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b not found: ID does not exist" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.517286 4995 scope.go:117] "RemoveContainer" containerID="b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6" Jan 26 23:11:22 crc kubenswrapper[4995]: E0126 23:11:22.517662 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\": container with ID starting with b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6 not found: ID does not exist" containerID="b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.517690 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6"} err="failed to get container status \"b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\": rpc 
error: code = NotFound desc = could not find container \"b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\": container with ID starting with b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6 not found: ID does not exist" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.517710 4995 scope.go:117] "RemoveContainer" containerID="bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3" Jan 26 23:11:22 crc kubenswrapper[4995]: E0126 23:11:22.517917 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\": container with ID starting with bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3 not found: ID does not exist" containerID="bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.517938 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3"} err="failed to get container status \"bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\": rpc error: code = NotFound desc = could not find container \"bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\": container with ID starting with bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3 not found: ID does not exist" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.517952 4995 scope.go:117] "RemoveContainer" containerID="701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59" Jan 26 23:11:22 crc kubenswrapper[4995]: E0126 23:11:22.518250 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\": container with ID starting with 
701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59 not found: ID does not exist" containerID="701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.518272 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59"} err="failed to get container status \"701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\": rpc error: code = NotFound desc = could not find container \"701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\": container with ID starting with 701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59 not found: ID does not exist" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.523158 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.525177 4995 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.525200 4995 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.675432 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.675623 4995 status_manager.go:851] 
"Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:24 crc kubenswrapper[4995]: I0126 23:11:24.763062 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:11:24 crc kubenswrapper[4995]: I0126 23:11:24.763478 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:24 crc kubenswrapper[4995]: I0126 23:11:24.763841 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:25 crc kubenswrapper[4995]: E0126 23:11:25.129708 4995 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e6ac0ca7c71bb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 23:11:19.866601915 +0000 UTC m=+184.031309380,LastTimestamp:2026-01-26 23:11:19.866601915 +0000 UTC m=+184.031309380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 23:11:25 crc kubenswrapper[4995]: E0126 23:11:25.879638 4995 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:25 crc kubenswrapper[4995]: E0126 23:11:25.880181 4995 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:25 crc kubenswrapper[4995]: E0126 23:11:25.880583 4995 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:25 crc kubenswrapper[4995]: E0126 23:11:25.880870 4995 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:25 crc kubenswrapper[4995]: E0126 
23:11:25.881172 4995 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:25 crc kubenswrapper[4995]: I0126 23:11:25.881210 4995 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 23:11:25 crc kubenswrapper[4995]: E0126 23:11:25.881714 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="200ms" Jan 26 23:11:26 crc kubenswrapper[4995]: E0126 23:11:26.083216 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="400ms" Jan 26 23:11:26 crc kubenswrapper[4995]: E0126 23:11:26.485072 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="800ms" Jan 26 23:11:26 crc kubenswrapper[4995]: I0126 23:11:26.519389 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:26 crc kubenswrapper[4995]: I0126 23:11:26.520074 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:27 crc kubenswrapper[4995]: E0126 23:11:27.286050 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="1.6s" Jan 26 23:11:28 crc kubenswrapper[4995]: E0126 23:11:28.887062 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="3.2s" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.444347 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" containerName="oauth-openshift" containerID="cri-o://47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc" gracePeriod=15 Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.807681 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.808677 4995 status_manager.go:851] "Failed to get status for pod" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tzh2d\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.809222 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.809815 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.946177 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-idp-0-file-data\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.946245 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-service-ca\") pod 
\"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.946304 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqvmw\" (UniqueName: \"kubernetes.io/projected/4d4d9e36-8d49-41a8-a04b-194a5f652f94-kube-api-access-pqvmw\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.946373 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-error\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947498 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947537 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-dir\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947562 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-policies\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947588 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-login\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947612 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-ocp-branding-template\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947586 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947818 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-cliconfig\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947903 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-provider-selection\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947938 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-session\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947995 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-trusted-ca-bundle\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.948018 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-serving-cert\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: 
\"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.948050 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-router-certs\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.948141 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.948428 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.948460 4995 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.948479 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.948491 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.950170 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.952928 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.953033 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.954078 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.954161 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.954487 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.956902 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.957295 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.959170 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d4d9e36-8d49-41a8-a04b-194a5f652f94-kube-api-access-pqvmw" (OuterVolumeSpecName: "kube-api-access-pqvmw") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "kube-api-access-pqvmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.959382 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.049862 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.049915 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.049931 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.049946 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.049963 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqvmw\" (UniqueName: \"kubernetes.io/projected/4d4d9e36-8d49-41a8-a04b-194a5f652f94-kube-api-access-pqvmw\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.049976 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.049989 4995 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.050001 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.050014 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.050028 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.050042 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: E0126 23:11:32.088441 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="6.4s" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.446175 4995 generic.go:334] "Generic (PLEG): container finished" podID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" containerID="47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc" exitCode=0 Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.446257 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" event={"ID":"4d4d9e36-8d49-41a8-a04b-194a5f652f94","Type":"ContainerDied","Data":"47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc"} Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.446306 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" event={"ID":"4d4d9e36-8d49-41a8-a04b-194a5f652f94","Type":"ContainerDied","Data":"0689043097d8a067e4df58fd7ad33b4d1504904c89d0939b98d21bff6ddfa350"} Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.446340 4995 scope.go:117] "RemoveContainer" containerID="47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.446382 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.447726 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.448379 4995 status_manager.go:851] "Failed to get status for pod" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tzh2d\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.448669 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.465354 4995 scope.go:117] "RemoveContainer" containerID="47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.465803 4995 status_manager.go:851] "Failed to get status for pod" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tzh2d\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.466307 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: E0126 23:11:32.466590 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc\": container with ID starting with 47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc not found: ID does not exist" containerID="47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.466634 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc"} err="failed to get container status \"47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc\": 
rpc error: code = NotFound desc = could not find container \"47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc\": container with ID starting with 47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc not found: ID does not exist" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.466640 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.516836 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.517784 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.518124 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.518331 4995 status_manager.go:851] "Failed to get status for pod" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tzh2d\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.537397 4995 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.537444 4995 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:32 crc kubenswrapper[4995]: E0126 23:11:32.537951 4995 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.538484 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.462422 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.462954 4995 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992" exitCode=1 Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.463018 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992"} Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.464025 4995 scope.go:117] "RemoveContainer" containerID="cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.464463 4995 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.465331 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.466198 4995 
status_manager.go:851] "Failed to get status for pod" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tzh2d\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.466553 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.468255 4995 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8a490820995659c15a6b2e4d5b62daa45fdd6e1b431a111f7f7f908b343e4e3a" exitCode=0 Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.468310 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8a490820995659c15a6b2e4d5b62daa45fdd6e1b431a111f7f7f908b343e4e3a"} Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.468341 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d2314e2188faeaf32a6cf21127483f220cdebc766690e36d37235d02e8d87366"} Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.468762 4995 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.468797 4995 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:33 crc kubenswrapper[4995]: E0126 23:11:33.469207 4995 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.469325 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.469705 4995 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.470218 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.470588 4995 status_manager.go:851] "Failed to get status for pod" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tzh2d\": dial tcp 
38.102.83.164:6443: connect: connection refused" Jan 26 23:11:34 crc kubenswrapper[4995]: I0126 23:11:34.475759 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3173b67394f85d47ffea3cd5e2d8b4029e40a41a461ce14859fe410ab00e828a"} Jan 26 23:11:34 crc kubenswrapper[4995]: I0126 23:11:34.476091 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d699e47f688ad89f79caa3b3fe2034eb852f895f20fe2d9ee960037f30b3ab67"} Jan 26 23:11:34 crc kubenswrapper[4995]: I0126 23:11:34.476124 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"64ff6d259144ecf1b8731927f2c6f62e970ff01c493c170c5de12c617fc05f46"} Jan 26 23:11:34 crc kubenswrapper[4995]: I0126 23:11:34.476136 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"890902989ae8b982296b97cc98bf5a7ead6595eda9b0820695dbd7ae4fcf527c"} Jan 26 23:11:34 crc kubenswrapper[4995]: I0126 23:11:34.479998 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 23:11:34 crc kubenswrapper[4995]: I0126 23:11:34.480057 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8f6a9084d0f6b2040299d77f9b92e16e0c42ce0333f615cc47331d4182a5a4b9"} Jan 26 23:11:35 crc kubenswrapper[4995]: I0126 23:11:35.490371 4995 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"49c79aaa5e427f465e5ab4760f0013c662bac62309e2669c05b123ab10d9765d"} Jan 26 23:11:35 crc kubenswrapper[4995]: I0126 23:11:35.490715 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:35 crc kubenswrapper[4995]: I0126 23:11:35.490853 4995 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:35 crc kubenswrapper[4995]: I0126 23:11:35.490877 4995 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:36 crc kubenswrapper[4995]: I0126 23:11:36.641184 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:11:36 crc kubenswrapper[4995]: I0126 23:11:36.645896 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:11:37 crc kubenswrapper[4995]: I0126 23:11:37.502230 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:11:37 crc kubenswrapper[4995]: I0126 23:11:37.538961 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:37 crc kubenswrapper[4995]: I0126 23:11:37.539174 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:37 crc kubenswrapper[4995]: I0126 23:11:37.546601 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 
23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.497040 4995 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.550297 4995 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.550330 4995 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.554350 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.556487 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5d3f37b1-bc5c-4bcf-8f82-5be5b2c5cb75" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.894253 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.894334 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.894398 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.895344 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.895485 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" containerID="cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c" gracePeriod=600 Jan 26 23:11:41 crc kubenswrapper[4995]: I0126 23:11:41.556704 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c" exitCode=0 Jan 26 23:11:41 crc kubenswrapper[4995]: I0126 23:11:41.556798 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c"} Jan 26 23:11:41 crc kubenswrapper[4995]: I0126 23:11:41.556981 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"91eb61e09ae5d6d6198d16f6e7e69e569eb136d572b2d062913b6b75ef9fce29"} Jan 26 23:11:41 crc kubenswrapper[4995]: I0126 23:11:41.557228 4995 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:41 crc kubenswrapper[4995]: I0126 23:11:41.557241 4995 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:46 crc kubenswrapper[4995]: I0126 23:11:46.535289 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5d3f37b1-bc5c-4bcf-8f82-5be5b2c5cb75" Jan 26 23:11:49 crc kubenswrapper[4995]: I0126 23:11:49.887759 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 23:11:50 crc kubenswrapper[4995]: I0126 23:11:50.046082 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 23:11:50 crc kubenswrapper[4995]: I0126 23:11:50.086288 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 23:11:50 crc kubenswrapper[4995]: I0126 23:11:50.719406 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 23:11:50 crc kubenswrapper[4995]: I0126 23:11:50.788981 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 23:11:50 crc kubenswrapper[4995]: I0126 23:11:50.870295 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.247271 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.302640 4995 
reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.335701 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.572708 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.592304 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.716925 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.845795 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.872643 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.124013 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.200845 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.266590 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.278372 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 
23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.329470 4995 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.341175 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.364380 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.443347 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.541531 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.837517 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.842911 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.875413 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.165583 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.203134 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.247585 4995 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.267382 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.346082 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.390428 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.394382 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.562173 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.562231 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.600133 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.662623 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.673736 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.737757 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 
23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.842848 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.847220 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.905361 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.021313 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.021377 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.054350 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.134649 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.203353 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.335487 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.486591 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.550309 4995 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.780981 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.820650 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.830626 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.832296 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.894966 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.949079 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.259698 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.279420 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.337571 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.349629 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 23:11:55 crc 
kubenswrapper[4995]: I0126 23:11:55.453892 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.466690 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.479628 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.531508 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.588734 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.634215 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.673796 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.722532 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.873595 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.040315 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.050511 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.107843 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.163443 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.275842 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.448143 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.492141 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.521488 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.558136 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.635256 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.699136 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.702163 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.703153 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.784234 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.853356 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.908081 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.113294 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.127857 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.142620 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.204410 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.236837 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.310238 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.443950 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.539232 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.710776 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.022890 4995 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.027115 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tzh2d","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.027174 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.033391 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.049039 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.049017635 podStartE2EDuration="18.049017635s" podCreationTimestamp="2026-01-26 23:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:11:58.045154793 +0000 UTC m=+222.209862258" watchObservedRunningTime="2026-01-26 23:11:58.049017635 +0000 UTC m=+222.213725110"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.123016 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.160428 4995 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.189306 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.204228 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.241509 4995 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.463738 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.487138 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.525716 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" path="/var/lib/kubelet/pods/4d4d9e36-8d49-41a8-a04b-194a5f652f94/volumes"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.526628 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.603504 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.636228 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.686118 4995 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.732019 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.809346 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.944180 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.949212 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.021236 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.047141 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.138050 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.162836 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.214160 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.398825 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.505588 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.528437 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.607040 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.620300 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.637503 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.759935 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.767047 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.809399 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.866542 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.868983 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.912321 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.931122 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.021053 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.043468 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7b964c775c-hq72c"]
Jan 26 23:12:00 crc kubenswrapper[4995]: E0126 23:12:00.043727 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" containerName="oauth-openshift"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.043745 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" containerName="oauth-openshift"
Jan 26 23:12:00 crc kubenswrapper[4995]: E0126 23:12:00.043768 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" containerName="installer"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.043778 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" containerName="installer"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.043926 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" containerName="installer"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.043946 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" containerName="oauth-openshift"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.044422 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.046566 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.048390 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.048451 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.048518 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.052066 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.052516 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.052533 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.054073 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.055211 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.055569 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.055901 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.056204 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.063151 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b964c775c-hq72c"]
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.065855 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.069411 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.074760 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.142238 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.176167 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233048 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233167 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-login\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233361 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-error\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233420 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233467 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233505 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233542 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233578 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233618 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233729 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xwrw\" (UniqueName: \"kubernetes.io/projected/8a2cd900-279f-47b1-81d3-19e4c207de72-kube-api-access-6xwrw\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233837 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a2cd900-279f-47b1-81d3-19e4c207de72-audit-dir\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233880 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-session\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233919 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233978 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-audit-policies\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.290452 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.319415 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.334920 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-error\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335027 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335087 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335181 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335242 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335300 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335349 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335430 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xwrw\" (UniqueName: \"kubernetes.io/projected/8a2cd900-279f-47b1-81d3-19e4c207de72-kube-api-access-6xwrw\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335481 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a2cd900-279f-47b1-81d3-19e4c207de72-audit-dir\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335535 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-session\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335593 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335658 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-audit-policies\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335747 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335798 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-login\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.337316 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a2cd900-279f-47b1-81d3-19e4c207de72-audit-dir\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.337933 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.342714 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.343431 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-session\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.343776 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-audit-policies\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.344296 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.345368 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.346406 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.347298 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.347342 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.347959 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.356521 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-login\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.358704 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-error\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.358854 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xwrw\" (UniqueName: \"kubernetes.io/projected/8a2cd900-279f-47b1-81d3-19e4c207de72-kube-api-access-6xwrw\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.364763 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.421344 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.469387 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.562661 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b964c775c-hq72c"]
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.605265 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.636886 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.665681 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.686517 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" event={"ID":"8a2cd900-279f-47b1-81d3-19e4c207de72","Type":"ContainerStarted","Data":"831552b2f815a751cb8434e8e3e9309051b7d1fa08658e6e65c83a47fd59f0d2"}
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.710705 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.732646 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.736281 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.904340 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.960701 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.010568 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.010685 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.078610 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.091917 4995 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.199527 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.211133 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.227919 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.264213 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.277924 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.294723 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.305734 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.364752 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.372054 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.428541 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.458590 4995 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-certs-default" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.486810 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.525244 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.639551 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.652418 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.693973 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" event={"ID":"8a2cd900-279f-47b1-81d3-19e4c207de72","Type":"ContainerStarted","Data":"24ded29b0787650485f0c2dd3b548ee5ac51fdb8310c27bc4a8b08f43bd44930"} Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.694325 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.701762 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.729260 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" podStartSLOduration=55.72923481 podStartE2EDuration="55.72923481s" podCreationTimestamp="2026-01-26 23:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:12:01.725360467 +0000 
UTC m=+225.890067942" watchObservedRunningTime="2026-01-26 23:12:01.72923481 +0000 UTC m=+225.893942285" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.819955 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.911705 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.966395 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.996947 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.003094 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.046309 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.125272 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.179913 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.313006 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.566511 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 
26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.664366 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.692055 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.728376 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.776617 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.779071 4995 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.779402 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae" gracePeriod=5 Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.814405 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.850824 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.901326 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.901628 4995 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.925174 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.935227 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.994725 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.080990 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.087593 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.211975 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.316366 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.338142 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.421738 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.457699 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.500322 4995 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.542741 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.556333 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.560862 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.751855 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.850145 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.998588 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.082054 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.087448 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.097183 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.097404 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 
23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.116486 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.163271 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.203623 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.218882 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.250243 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.257858 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.337738 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.367638 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.506220 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.561013 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.575328 4995 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.585246 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.602918 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.697018 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.835546 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.895847 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.022860 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.148735 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.196269 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.257008 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.283062 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 23:12:05 crc 
kubenswrapper[4995]: I0126 23:12:05.420797 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.455876 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.459949 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.496837 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.535908 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.560799 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.707341 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.808774 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.834152 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 23:12:06 crc kubenswrapper[4995]: I0126 23:12:06.084129 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 23:12:06 crc kubenswrapper[4995]: I0126 23:12:06.126386 4995 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Jan 26 23:12:06 crc kubenswrapper[4995]: I0126 23:12:06.313654 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 23:12:06 crc kubenswrapper[4995]: I0126 23:12:06.351343 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 23:12:06 crc kubenswrapper[4995]: I0126 23:12:06.685262 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 23:12:06 crc kubenswrapper[4995]: I0126 23:12:06.995487 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 23:12:07 crc kubenswrapper[4995]: I0126 23:12:07.155910 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.385391 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.385784 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.449595 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.449648 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.449725 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.449799 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.449847 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.449883 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.449953 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.450009 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.450090 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.450330 4995 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.450355 4995 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.450367 4995 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.450377 4995 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.456917 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.523851 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.551741 4995 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.735927 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.736025 4995 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae" exitCode=137 Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.736161 4995 scope.go:117] "RemoveContainer" containerID="cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.736186 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.758825 4995 scope.go:117] "RemoveContainer" containerID="cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae"
Jan 26 23:12:08 crc kubenswrapper[4995]: E0126 23:12:08.759546 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae\": container with ID starting with cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae not found: ID does not exist" containerID="cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae"
Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.759604 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae"} err="failed to get container status \"cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae\": rpc error: code = NotFound desc = could not find container \"cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae\": container with ID starting with cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae not found: ID does not exist"
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.012975 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zp6fr"]
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.013763 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" podUID="1fb6bf0f-13dc-4a58-853b-98c00142f0bb" containerName="controller-manager" containerID="cri-o://f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d" gracePeriod=30
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.116572 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"]
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.117120 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" podUID="7f5c78ad-3088-4100-90ac-f863bb21e4a2" containerName="route-controller-manager" containerID="cri-o://6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4" gracePeriod=30
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.441670 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.607694 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj7jv\" (UniqueName: \"kubernetes.io/projected/7f5c78ad-3088-4100-90ac-f863bb21e4a2-kube-api-access-dj7jv\") pod \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") "
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.607761 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-config\") pod \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") "
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.607803 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f5c78ad-3088-4100-90ac-f863bb21e4a2-serving-cert\") pod \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") "
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.607874 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-client-ca\") pod \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") "
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.608677 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-config" (OuterVolumeSpecName: "config") pod "7f5c78ad-3088-4100-90ac-f863bb21e4a2" (UID: "7f5c78ad-3088-4100-90ac-f863bb21e4a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.609208 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-client-ca" (OuterVolumeSpecName: "client-ca") pod "7f5c78ad-3088-4100-90ac-f863bb21e4a2" (UID: "7f5c78ad-3088-4100-90ac-f863bb21e4a2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.609539 4995 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.609561 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-config\") on node \"crc\" DevicePath \"\""
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.613891 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5c78ad-3088-4100-90ac-f863bb21e4a2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7f5c78ad-3088-4100-90ac-f863bb21e4a2" (UID: "7f5c78ad-3088-4100-90ac-f863bb21e4a2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.614269 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f5c78ad-3088-4100-90ac-f863bb21e4a2-kube-api-access-dj7jv" (OuterVolumeSpecName: "kube-api-access-dj7jv") pod "7f5c78ad-3088-4100-90ac-f863bb21e4a2" (UID: "7f5c78ad-3088-4100-90ac-f863bb21e4a2"). InnerVolumeSpecName "kube-api-access-dj7jv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.710990 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj7jv\" (UniqueName: \"kubernetes.io/projected/7f5c78ad-3088-4100-90ac-f863bb21e4a2-kube-api-access-dj7jv\") on node \"crc\" DevicePath \"\""
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.711022 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f5c78ad-3088-4100-90ac-f863bb21e4a2-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.820112 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr"
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.854113 4995 generic.go:334] "Generic (PLEG): container finished" podID="7f5c78ad-3088-4100-90ac-f863bb21e4a2" containerID="6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4" exitCode=0
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.854452 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.854629 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" event={"ID":"7f5c78ad-3088-4100-90ac-f863bb21e4a2","Type":"ContainerDied","Data":"6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4"}
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.854741 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" event={"ID":"7f5c78ad-3088-4100-90ac-f863bb21e4a2","Type":"ContainerDied","Data":"d37e0cbeaf79e04860a72c99f4fde9e7eba767757f8c7acc0cfe617f3b06e685"}
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.854846 4995 scope.go:117] "RemoveContainer" containerID="6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4"
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.869993 4995 generic.go:334] "Generic (PLEG): container finished" podID="1fb6bf0f-13dc-4a58-853b-98c00142f0bb" containerID="f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d" exitCode=0
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.870280 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" event={"ID":"1fb6bf0f-13dc-4a58-853b-98c00142f0bb","Type":"ContainerDied","Data":"f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d"}
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.870445 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" event={"ID":"1fb6bf0f-13dc-4a58-853b-98c00142f0bb","Type":"ContainerDied","Data":"8420e19a90b73cb1baaf3ed3fb083fef494d2cf0339203afd00eae69282ad6ad"}
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.870558 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr"
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.882759 4995 scope.go:117] "RemoveContainer" containerID="6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4"
Jan 26 23:12:28 crc kubenswrapper[4995]: E0126 23:12:28.883483 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4\": container with ID starting with 6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4 not found: ID does not exist" containerID="6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4"
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.883722 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4"} err="failed to get container status \"6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4\": rpc error: code = NotFound desc = could not find container \"6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4\": container with ID starting with 6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4 not found: ID does not exist"
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.883844 4995 scope.go:117] "RemoveContainer" containerID="f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d"
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.897074 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"]
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.901793 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"]
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.902461 4995 scope.go:117] "RemoveContainer" containerID="f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d"
Jan 26 23:12:28 crc kubenswrapper[4995]: E0126 23:12:28.902900 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d\": container with ID starting with f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d not found: ID does not exist" containerID="f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d"
Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.902992 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d"} err="failed to get container status \"f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d\": rpc error: code = NotFound desc = could not find container \"f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d\": container with ID starting with f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d not found: ID does not exist"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.015090 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-proxy-ca-bundles\") pod \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") "
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.015151 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfdf6\" (UniqueName: \"kubernetes.io/projected/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-kube-api-access-pfdf6\") pod \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") "
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.015200 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-serving-cert\") pod \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") "
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.015260 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-client-ca\") pod \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") "
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.015294 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-config\") pod \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") "
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.016343 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1fb6bf0f-13dc-4a58-853b-98c00142f0bb" (UID: "1fb6bf0f-13dc-4a58-853b-98c00142f0bb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.016372 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-client-ca" (OuterVolumeSpecName: "client-ca") pod "1fb6bf0f-13dc-4a58-853b-98c00142f0bb" (UID: "1fb6bf0f-13dc-4a58-853b-98c00142f0bb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.016484 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-config" (OuterVolumeSpecName: "config") pod "1fb6bf0f-13dc-4a58-853b-98c00142f0bb" (UID: "1fb6bf0f-13dc-4a58-853b-98c00142f0bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.019176 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-kube-api-access-pfdf6" (OuterVolumeSpecName: "kube-api-access-pfdf6") pod "1fb6bf0f-13dc-4a58-853b-98c00142f0bb" (UID: "1fb6bf0f-13dc-4a58-853b-98c00142f0bb"). InnerVolumeSpecName "kube-api-access-pfdf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.019743 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1fb6bf0f-13dc-4a58-853b-98c00142f0bb" (UID: "1fb6bf0f-13dc-4a58-853b-98c00142f0bb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.117069 4995 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.117201 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfdf6\" (UniqueName: \"kubernetes.io/projected/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-kube-api-access-pfdf6\") on node \"crc\" DevicePath \"\""
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.117217 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.117228 4995 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.117241 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-config\") on node \"crc\" DevicePath \"\""
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.193846 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zp6fr"]
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.200911 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zp6fr"]
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.341984 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"]
Jan 26 23:12:29 crc kubenswrapper[4995]: E0126 23:12:29.342292 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5c78ad-3088-4100-90ac-f863bb21e4a2" containerName="route-controller-manager"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.342311 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5c78ad-3088-4100-90ac-f863bb21e4a2" containerName="route-controller-manager"
Jan 26 23:12:29 crc kubenswrapper[4995]: E0126 23:12:29.342328 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb6bf0f-13dc-4a58-853b-98c00142f0bb" containerName="controller-manager"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.342335 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb6bf0f-13dc-4a58-853b-98c00142f0bb" containerName="controller-manager"
Jan 26 23:12:29 crc kubenswrapper[4995]: E0126 23:12:29.342347 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.342352 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.342445 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5c78ad-3088-4100-90ac-f863bb21e4a2" containerName="route-controller-manager"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.342459 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb6bf0f-13dc-4a58-853b-98c00142f0bb" containerName="controller-manager"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.342471 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.342834 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.343768 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79ff58c766-m964x"]
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.344333 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.345247 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.345489 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.349802 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.349856 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.349930 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.349802 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.350271 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.350618 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.352377 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.352904 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.353198 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.353311 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.356050 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"]
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.358621 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.358904 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79ff58c766-m964x"]
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528179 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac827692-56fd-45a9-8cc5-48a6a6c87eac-config\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528223 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-client-ca\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528244 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-config\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528263 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlsm5\" (UniqueName: \"kubernetes.io/projected/ac827692-56fd-45a9-8cc5-48a6a6c87eac-kube-api-access-xlsm5\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528299 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526f514d-b20f-45b7-b477-198fbb124d43-serving-cert\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528320 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm9sf\" (UniqueName: \"kubernetes.io/projected/526f514d-b20f-45b7-b477-198fbb124d43-kube-api-access-tm9sf\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528345 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac827692-56fd-45a9-8cc5-48a6a6c87eac-client-ca\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528365 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-proxy-ca-bundles\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528391 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac827692-56fd-45a9-8cc5-48a6a6c87eac-serving-cert\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.629818 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlsm5\" (UniqueName: \"kubernetes.io/projected/ac827692-56fd-45a9-8cc5-48a6a6c87eac-kube-api-access-xlsm5\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.629995 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526f514d-b20f-45b7-b477-198fbb124d43-serving-cert\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.630079 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm9sf\" (UniqueName: \"kubernetes.io/projected/526f514d-b20f-45b7-b477-198fbb124d43-kube-api-access-tm9sf\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.630208 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac827692-56fd-45a9-8cc5-48a6a6c87eac-client-ca\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.630243 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-proxy-ca-bundles\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.630315 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac827692-56fd-45a9-8cc5-48a6a6c87eac-serving-cert\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.630363 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac827692-56fd-45a9-8cc5-48a6a6c87eac-config\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.630406 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-client-ca\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.630451 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-config\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.632374 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac827692-56fd-45a9-8cc5-48a6a6c87eac-config\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.632806 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac827692-56fd-45a9-8cc5-48a6a6c87eac-client-ca\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.632997 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-config\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.633193 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-proxy-ca-bundles\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.633277 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-client-ca\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.639342 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac827692-56fd-45a9-8cc5-48a6a6c87eac-serving-cert\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.647084 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526f514d-b20f-45b7-b477-198fbb124d43-serving-cert\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.652676 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm9sf\" (UniqueName: \"kubernetes.io/projected/526f514d-b20f-45b7-b477-198fbb124d43-kube-api-access-tm9sf\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.655687 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlsm5\" (UniqueName: \"kubernetes.io/projected/ac827692-56fd-45a9-8cc5-48a6a6c87eac-kube-api-access-xlsm5\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.727611 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.739346 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.001681 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79ff58c766-m964x"]
Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.158005 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"]
Jan 26 23:12:30 crc kubenswrapper[4995]: W0126 23:12:30.165535 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac827692_56fd_45a9_8cc5_48a6a6c87eac.slice/crio-64d2d597a07afe330e661770e4c6b0726cb20ed72a984a907c8b6951d04df9c1 WatchSource:0}: Error finding container 64d2d597a07afe330e661770e4c6b0726cb20ed72a984a907c8b6951d04df9c1: Status 404 returned error can't find the container with id 64d2d597a07afe330e661770e4c6b0726cb20ed72a984a907c8b6951d04df9c1
Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.524868 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fb6bf0f-13dc-4a58-853b-98c00142f0bb" path="/var/lib/kubelet/pods/1fb6bf0f-13dc-4a58-853b-98c00142f0bb/volumes"
Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.526051 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f5c78ad-3088-4100-90ac-f863bb21e4a2" path="/var/lib/kubelet/pods/7f5c78ad-3088-4100-90ac-f863bb21e4a2/volumes"
Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.884187 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" event={"ID":"ac827692-56fd-45a9-8cc5-48a6a6c87eac","Type":"ContainerStarted","Data":"0a7508f64534f42cf30fc8d6c7530ab1d9697d46743b2c670f2de3f2d0e1577d"}
Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.884228 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" event={"ID":"ac827692-56fd-45a9-8cc5-48a6a6c87eac","Type":"ContainerStarted","Data":"64d2d597a07afe330e661770e4c6b0726cb20ed72a984a907c8b6951d04df9c1"}
Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.885345 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.886741 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" event={"ID":"526f514d-b20f-45b7-b477-198fbb124d43","Type":"ContainerStarted","Data":"866b4e150df34bb856c7909125a903ef3e4e3722c867e9f3bd61353008835213"}
Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.886772 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" event={"ID":"526f514d-b20f-45b7-b477-198fbb124d43","Type":"ContainerStarted","Data":"2928f01136cbe6a2be2b1a77289b1ab6916a7d7784e58242ff500aa4ed967936"}
Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.887146 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.893502 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x"
Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.901736 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"
Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.911501 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" podStartSLOduration=2.911482752 podStartE2EDuration="2.911482752s" podCreationTimestamp="2026-01-26 23:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:12:30.909859872 +0000 UTC m=+255.074567337" watchObservedRunningTime="2026-01-26 23:12:30.911482752 +0000 UTC m=+255.076190227"
Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.932142 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" podStartSLOduration=2.932128324 podStartE2EDuration="2.932128324s" podCreationTimestamp="2026-01-26 23:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:12:30.929917819 +0000 UTC m=+255.094625284" watchObservedRunningTime="2026-01-26 23:12:30.932128324 +0000 UTC m=+255.096835789"
Jan 26 23:12:48 crc kubenswrapper[4995]: I0126 23:12:48.448502 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79ff58c766-m964x"]
Jan 26 23:12:48 crc kubenswrapper[4995]: I0126 23:12:48.449248 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" podUID="526f514d-b20f-45b7-b477-198fbb124d43" containerName="controller-manager" containerID="cri-o://866b4e150df34bb856c7909125a903ef3e4e3722c867e9f3bd61353008835213" gracePeriod=30
Jan 26 23:12:48 crc kubenswrapper[4995]: I0126 23:12:48.986983 4995 generic.go:334] "Generic (PLEG): container finished" podID="526f514d-b20f-45b7-b477-198fbb124d43" containerID="866b4e150df34bb856c7909125a903ef3e4e3722c867e9f3bd61353008835213" exitCode=0
Jan 26 23:12:48 crc kubenswrapper[4995]: I0126 23:12:48.987073 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" event={"ID":"526f514d-b20f-45b7-b477-198fbb124d43","Type":"ContainerDied","Data":"866b4e150df34bb856c7909125a903ef3e4e3722c867e9f3bd61353008835213"} Jan 26 23:12:48 crc kubenswrapper[4995]: I0126 23:12:48.987572 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" event={"ID":"526f514d-b20f-45b7-b477-198fbb124d43","Type":"ContainerDied","Data":"2928f01136cbe6a2be2b1a77289b1ab6916a7d7784e58242ff500aa4ed967936"} Jan 26 23:12:48 crc kubenswrapper[4995]: I0126 23:12:48.987588 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2928f01136cbe6a2be2b1a77289b1ab6916a7d7784e58242ff500aa4ed967936" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.015741 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.183964 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-client-ca\") pod \"526f514d-b20f-45b7-b477-198fbb124d43\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.184030 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-config\") pod \"526f514d-b20f-45b7-b477-198fbb124d43\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.185036 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-client-ca" (OuterVolumeSpecName: "client-ca") pod "526f514d-b20f-45b7-b477-198fbb124d43" (UID: 
"526f514d-b20f-45b7-b477-198fbb124d43"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.185051 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-config" (OuterVolumeSpecName: "config") pod "526f514d-b20f-45b7-b477-198fbb124d43" (UID: "526f514d-b20f-45b7-b477-198fbb124d43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.185169 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-proxy-ca-bundles\") pod \"526f514d-b20f-45b7-b477-198fbb124d43\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.185201 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526f514d-b20f-45b7-b477-198fbb124d43-serving-cert\") pod \"526f514d-b20f-45b7-b477-198fbb124d43\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.185223 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm9sf\" (UniqueName: \"kubernetes.io/projected/526f514d-b20f-45b7-b477-198fbb124d43-kube-api-access-tm9sf\") pod \"526f514d-b20f-45b7-b477-198fbb124d43\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.185694 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "526f514d-b20f-45b7-b477-198fbb124d43" (UID: "526f514d-b20f-45b7-b477-198fbb124d43"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.186230 4995 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.186252 4995 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.186260 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.190762 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/526f514d-b20f-45b7-b477-198fbb124d43-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "526f514d-b20f-45b7-b477-198fbb124d43" (UID: "526f514d-b20f-45b7-b477-198fbb124d43"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.191297 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/526f514d-b20f-45b7-b477-198fbb124d43-kube-api-access-tm9sf" (OuterVolumeSpecName: "kube-api-access-tm9sf") pod "526f514d-b20f-45b7-b477-198fbb124d43" (UID: "526f514d-b20f-45b7-b477-198fbb124d43"). InnerVolumeSpecName "kube-api-access-tm9sf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.287396 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526f514d-b20f-45b7-b477-198fbb124d43-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.287462 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm9sf\" (UniqueName: \"kubernetes.io/projected/526f514d-b20f-45b7-b477-198fbb124d43-kube-api-access-tm9sf\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.991745 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.018233 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79ff58c766-m964x"] Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.021663 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-79ff58c766-m964x"] Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.358749 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5ff868d854-x4qdc"] Jan 26 23:12:50 crc kubenswrapper[4995]: E0126 23:12:50.359438 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526f514d-b20f-45b7-b477-198fbb124d43" containerName="controller-manager" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.359483 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="526f514d-b20f-45b7-b477-198fbb124d43" containerName="controller-manager" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.359693 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="526f514d-b20f-45b7-b477-198fbb124d43" containerName="controller-manager" Jan 26 
23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.360405 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.362471 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.363682 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.363965 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.364151 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.364743 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.364802 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.373645 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.373832 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ff868d854-x4qdc"] Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.501846 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqdqv\" (UniqueName: \"kubernetes.io/projected/b80f458e-de76-46ef-9e85-73791a38b0f7-kube-api-access-zqdqv\") pod 
\"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.501911 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80f458e-de76-46ef-9e85-73791a38b0f7-serving-cert\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.501932 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-client-ca\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.501954 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-config\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.501971 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-proxy-ca-bundles\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.524034 4995 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="526f514d-b20f-45b7-b477-198fbb124d43" path="/var/lib/kubelet/pods/526f514d-b20f-45b7-b477-198fbb124d43/volumes" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.603545 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqdqv\" (UniqueName: \"kubernetes.io/projected/b80f458e-de76-46ef-9e85-73791a38b0f7-kube-api-access-zqdqv\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.603605 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80f458e-de76-46ef-9e85-73791a38b0f7-serving-cert\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.603633 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-client-ca\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.603665 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-config\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.603686 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-proxy-ca-bundles\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.605204 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-proxy-ca-bundles\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.605613 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-client-ca\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.606653 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-config\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.608620 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80f458e-de76-46ef-9e85-73791a38b0f7-serving-cert\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.630551 4995 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-zqdqv\" (UniqueName: \"kubernetes.io/projected/b80f458e-de76-46ef-9e85-73791a38b0f7-kube-api-access-zqdqv\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.679815 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.859861 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ff868d854-x4qdc"] Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.997065 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" event={"ID":"b80f458e-de76-46ef-9e85-73791a38b0f7","Type":"ContainerStarted","Data":"dce09e3ab65a11ac1479cd379ccc1aff37b0e6548c52af866d7b6066267832b6"} Jan 26 23:12:52 crc kubenswrapper[4995]: I0126 23:12:52.005404 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" event={"ID":"b80f458e-de76-46ef-9e85-73791a38b0f7","Type":"ContainerStarted","Data":"849492b8200b755cb38c7ef89b4b13fd4cbe5025d5e9aca12e1f15784fa69cc0"} Jan 26 23:12:52 crc kubenswrapper[4995]: I0126 23:12:52.006123 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:52 crc kubenswrapper[4995]: I0126 23:12:52.010933 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:52 crc kubenswrapper[4995]: I0126 23:12:52.030962 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" podStartSLOduration=4.030940218 podStartE2EDuration="4.030940218s" podCreationTimestamp="2026-01-26 23:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:12:52.022134844 +0000 UTC m=+276.186842319" watchObservedRunningTime="2026-01-26 23:12:52.030940218 +0000 UTC m=+276.195647673" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.806431 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8lxhv"] Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.808019 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.822320 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8lxhv"] Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931073 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931150 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40aa93b4-3513-4def-ab82-d438b38e5e92-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931172 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40aa93b4-3513-4def-ab82-d438b38e5e92-registry-certificates\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931207 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40aa93b4-3513-4def-ab82-d438b38e5e92-trusted-ca\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931248 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40aa93b4-3513-4def-ab82-d438b38e5e92-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931264 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-registry-tls\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931377 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr8lc\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-kube-api-access-pr8lc\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: 
\"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931419 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-bound-sa-token\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.952716 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.032368 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40aa93b4-3513-4def-ab82-d438b38e5e92-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.032414 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40aa93b4-3513-4def-ab82-d438b38e5e92-registry-certificates\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.032451 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/40aa93b4-3513-4def-ab82-d438b38e5e92-trusted-ca\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.032482 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40aa93b4-3513-4def-ab82-d438b38e5e92-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.032501 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-registry-tls\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.032525 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr8lc\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-kube-api-access-pr8lc\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.032542 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-bound-sa-token\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.033495 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40aa93b4-3513-4def-ab82-d438b38e5e92-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.033891 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40aa93b4-3513-4def-ab82-d438b38e5e92-registry-certificates\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.035398 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40aa93b4-3513-4def-ab82-d438b38e5e92-trusted-ca\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.037639 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40aa93b4-3513-4def-ab82-d438b38e5e92-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.037928 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-registry-tls\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 
23:13:14.049251 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-bound-sa-token\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.049910 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr8lc\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-kube-api-access-pr8lc\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.134401 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.536611 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8lxhv"] Jan 26 23:13:14 crc kubenswrapper[4995]: W0126 23:13:14.541882 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40aa93b4_3513_4def_ab82_d438b38e5e92.slice/crio-8841c8b62108eabdf794f2003afd586bfc17c428f8f9c2427123e96745b5d672 WatchSource:0}: Error finding container 8841c8b62108eabdf794f2003afd586bfc17c428f8f9c2427123e96745b5d672: Status 404 returned error can't find the container with id 8841c8b62108eabdf794f2003afd586bfc17c428f8f9c2427123e96745b5d672 Jan 26 23:13:15 crc kubenswrapper[4995]: I0126 23:13:15.131945 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" 
event={"ID":"40aa93b4-3513-4def-ab82-d438b38e5e92","Type":"ContainerStarted","Data":"15e2eb8a7af4db246b80b7bd9e7a0494f5f523a08daa3e093fdc8f1a6582933e"} Jan 26 23:13:15 crc kubenswrapper[4995]: I0126 23:13:15.132324 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" event={"ID":"40aa93b4-3513-4def-ab82-d438b38e5e92","Type":"ContainerStarted","Data":"8841c8b62108eabdf794f2003afd586bfc17c428f8f9c2427123e96745b5d672"} Jan 26 23:13:15 crc kubenswrapper[4995]: I0126 23:13:15.133287 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:15 crc kubenswrapper[4995]: I0126 23:13:15.163322 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" podStartSLOduration=2.163300016 podStartE2EDuration="2.163300016s" podCreationTimestamp="2026-01-26 23:13:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:13:15.159086959 +0000 UTC m=+299.323794434" watchObservedRunningTime="2026-01-26 23:13:15.163300016 +0000 UTC m=+299.328007481" Jan 26 23:13:16 crc kubenswrapper[4995]: I0126 23:13:16.317083 4995 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.297910 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8z855"] Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.298965 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8z855" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="registry-server" containerID="cri-o://2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40" gracePeriod=30 Jan 
26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.303974 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6wf22"] Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.304352 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6wf22" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="registry-server" containerID="cri-o://8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a" gracePeriod=30 Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.321363 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phjts"] Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.322137 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerName="marketplace-operator" containerID="cri-o://ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad" gracePeriod=30 Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.337628 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-px4t9"] Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.338176 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-px4t9" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="registry-server" containerID="cri-o://049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429" gracePeriod=30 Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.347356 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wq2hm"] Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.347639 4995 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-wq2hm" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="registry-server" containerID="cri-o://4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba" gracePeriod=30 Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.364188 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vsjb7"] Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.364952 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.375375 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vsjb7"] Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.448295 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d781053b-fcf3-44a7-812a-8af6c2c1ab07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.448843 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgfp7\" (UniqueName: \"kubernetes.io/projected/d781053b-fcf3-44a7-812a-8af6c2c1ab07-kube-api-access-zgfp7\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.448886 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d781053b-fcf3-44a7-812a-8af6c2c1ab07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.550505 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgfp7\" (UniqueName: \"kubernetes.io/projected/d781053b-fcf3-44a7-812a-8af6c2c1ab07-kube-api-access-zgfp7\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.550565 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d781053b-fcf3-44a7-812a-8af6c2c1ab07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.550631 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d781053b-fcf3-44a7-812a-8af6c2c1ab07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.552288 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d781053b-fcf3-44a7-812a-8af6c2c1ab07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc 
kubenswrapper[4995]: I0126 23:13:22.556669 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d781053b-fcf3-44a7-812a-8af6c2c1ab07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.569278 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgfp7\" (UniqueName: \"kubernetes.io/projected/d781053b-fcf3-44a7-812a-8af6c2c1ab07-kube-api-access-zgfp7\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.748018 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.778060 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.948703 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.955350 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-utilities\") pod \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.955397 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbb4m\" (UniqueName: \"kubernetes.io/projected/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-kube-api-access-jbb4m\") pod \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.955425 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-catalog-content\") pod \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.956291 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-utilities" (OuterVolumeSpecName: "utilities") pod "b7295e1f-e3cb-4710-8763-b02b3e9ed67b" (UID: "b7295e1f-e3cb-4710-8763-b02b3e9ed67b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.960211 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.984634 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-kube-api-access-jbb4m" (OuterVolumeSpecName: "kube-api-access-jbb4m") pod "b7295e1f-e3cb-4710-8763-b02b3e9ed67b" (UID: "b7295e1f-e3cb-4710-8763-b02b3e9ed67b"). InnerVolumeSpecName "kube-api-access-jbb4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.998151 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.005310 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.029619 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7295e1f-e3cb-4710-8763-b02b3e9ed67b" (UID: "b7295e1f-e3cb-4710-8763-b02b3e9ed67b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.056785 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-utilities\") pod \"58513b5e-460e-4344-91e3-1d20e26fd533\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.056824 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-utilities\") pod \"38be674d-6ae2-441d-b361-a9eea3b694a7\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.056863 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c27fz\" (UniqueName: \"kubernetes.io/projected/38be674d-6ae2-441d-b361-a9eea3b694a7-kube-api-access-c27fz\") pod \"38be674d-6ae2-441d-b361-a9eea3b694a7\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.056908 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbvbj\" (UniqueName: \"kubernetes.io/projected/58513b5e-460e-4344-91e3-1d20e26fd533-kube-api-access-xbvbj\") pod \"58513b5e-460e-4344-91e3-1d20e26fd533\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.056957 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-catalog-content\") pod \"58513b5e-460e-4344-91e3-1d20e26fd533\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.056981 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-catalog-content\") pod \"38be674d-6ae2-441d-b361-a9eea3b694a7\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.057222 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.057240 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbb4m\" (UniqueName: \"kubernetes.io/projected/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-kube-api-access-jbb4m\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.057255 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.057748 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-utilities" (OuterVolumeSpecName: "utilities") pod "58513b5e-460e-4344-91e3-1d20e26fd533" (UID: "58513b5e-460e-4344-91e3-1d20e26fd533"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.057970 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-utilities" (OuterVolumeSpecName: "utilities") pod "38be674d-6ae2-441d-b361-a9eea3b694a7" (UID: "38be674d-6ae2-441d-b361-a9eea3b694a7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.060436 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58513b5e-460e-4344-91e3-1d20e26fd533-kube-api-access-xbvbj" (OuterVolumeSpecName: "kube-api-access-xbvbj") pod "58513b5e-460e-4344-91e3-1d20e26fd533" (UID: "58513b5e-460e-4344-91e3-1d20e26fd533"). InnerVolumeSpecName "kube-api-access-xbvbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.061316 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38be674d-6ae2-441d-b361-a9eea3b694a7-kube-api-access-c27fz" (OuterVolumeSpecName: "kube-api-access-c27fz") pod "38be674d-6ae2-441d-b361-a9eea3b694a7" (UID: "38be674d-6ae2-441d-b361-a9eea3b694a7"). InnerVolumeSpecName "kube-api-access-c27fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.081006 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38be674d-6ae2-441d-b361-a9eea3b694a7" (UID: "38be674d-6ae2-441d-b361-a9eea3b694a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.117791 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58513b5e-460e-4344-91e3-1d20e26fd533" (UID: "58513b5e-460e-4344-91e3-1d20e26fd533"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.157953 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-utilities\") pod \"5166d9b5-534e-4426-8085-a1900c7bdafb\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158005 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-operator-metrics\") pod \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158035 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4fnf\" (UniqueName: \"kubernetes.io/projected/3f9a7b30-dccb-4753-81a1-622853d6ba3c-kube-api-access-x4fnf\") pod \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158056 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-trusted-ca\") pod \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158075 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-catalog-content\") pod \"5166d9b5-534e-4426-8085-a1900c7bdafb\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158110 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-df626\" (UniqueName: \"kubernetes.io/projected/5166d9b5-534e-4426-8085-a1900c7bdafb-kube-api-access-df626\") pod \"5166d9b5-534e-4426-8085-a1900c7bdafb\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158319 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158330 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158339 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c27fz\" (UniqueName: \"kubernetes.io/projected/38be674d-6ae2-441d-b361-a9eea3b694a7-kube-api-access-c27fz\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158349 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbvbj\" (UniqueName: \"kubernetes.io/projected/58513b5e-460e-4344-91e3-1d20e26fd533-kube-api-access-xbvbj\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158357 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158365 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158773 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3f9a7b30-dccb-4753-81a1-622853d6ba3c" (UID: "3f9a7b30-dccb-4753-81a1-622853d6ba3c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158831 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-utilities" (OuterVolumeSpecName: "utilities") pod "5166d9b5-534e-4426-8085-a1900c7bdafb" (UID: "5166d9b5-534e-4426-8085-a1900c7bdafb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.161423 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5166d9b5-534e-4426-8085-a1900c7bdafb-kube-api-access-df626" (OuterVolumeSpecName: "kube-api-access-df626") pod "5166d9b5-534e-4426-8085-a1900c7bdafb" (UID: "5166d9b5-534e-4426-8085-a1900c7bdafb"). InnerVolumeSpecName "kube-api-access-df626". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.161670 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9a7b30-dccb-4753-81a1-622853d6ba3c-kube-api-access-x4fnf" (OuterVolumeSpecName: "kube-api-access-x4fnf") pod "3f9a7b30-dccb-4753-81a1-622853d6ba3c" (UID: "3f9a7b30-dccb-4753-81a1-622853d6ba3c"). InnerVolumeSpecName "kube-api-access-x4fnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.161882 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3f9a7b30-dccb-4753-81a1-622853d6ba3c" (UID: "3f9a7b30-dccb-4753-81a1-622853d6ba3c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.174943 4995 generic.go:334] "Generic (PLEG): container finished" podID="58513b5e-460e-4344-91e3-1d20e26fd533" containerID="8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a" exitCode=0 Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.175010 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.175015 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wf22" event={"ID":"58513b5e-460e-4344-91e3-1d20e26fd533","Type":"ContainerDied","Data":"8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.175144 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wf22" event={"ID":"58513b5e-460e-4344-91e3-1d20e26fd533","Type":"ContainerDied","Data":"f1140a94397286fd3722f80f6c4a1ec3c8895bbf65314d7a81fe9bc35b32d3b7"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.175173 4995 scope.go:117] "RemoveContainer" containerID="8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.180679 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" 
containerID="2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40" exitCode=0 Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.180765 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z855" event={"ID":"b7295e1f-e3cb-4710-8763-b02b3e9ed67b","Type":"ContainerDied","Data":"2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.180792 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z855" event={"ID":"b7295e1f-e3cb-4710-8763-b02b3e9ed67b","Type":"ContainerDied","Data":"a9d19028654a4b4f323d0e8da8ba08742825da3af7b48d707205e793ef542ae5"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.180850 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.186333 4995 generic.go:334] "Generic (PLEG): container finished" podID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerID="ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad" exitCode=0 Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.186391 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" event={"ID":"3f9a7b30-dccb-4753-81a1-622853d6ba3c","Type":"ContainerDied","Data":"ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.186417 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" event={"ID":"3f9a7b30-dccb-4753-81a1-622853d6ba3c","Type":"ContainerDied","Data":"f901f601e0243ea0adb58f7b81260269e5e87406c390fbde6045e9147797112d"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.186494 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.188146 4995 generic.go:334] "Generic (PLEG): container finished" podID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerID="049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429" exitCode=0 Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.188205 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px4t9" event={"ID":"38be674d-6ae2-441d-b361-a9eea3b694a7","Type":"ContainerDied","Data":"049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.188226 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px4t9" event={"ID":"38be674d-6ae2-441d-b361-a9eea3b694a7","Type":"ContainerDied","Data":"2791ea2f560df413a781ffdcf254d63067a2528c47ab19f2d416f080d3de6868"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.188229 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.192741 4995 generic.go:334] "Generic (PLEG): container finished" podID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerID="4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba" exitCode=0 Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.192774 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq2hm" event={"ID":"5166d9b5-534e-4426-8085-a1900c7bdafb","Type":"ContainerDied","Data":"4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.192796 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq2hm" event={"ID":"5166d9b5-534e-4426-8085-a1900c7bdafb","Type":"ContainerDied","Data":"e6c2cdd4d29af6d09c813a8f167fa421c7aeada38df75885bcbaf2e7ea7b36fd"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.192818 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.206538 4995 scope.go:117] "RemoveContainer" containerID="51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.209439 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6wf22"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.212206 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6wf22"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.238200 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8z855"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.253436 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8z855"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.258428 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phjts"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.259527 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.259573 4995 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.259584 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4fnf\" (UniqueName: \"kubernetes.io/projected/3f9a7b30-dccb-4753-81a1-622853d6ba3c-kube-api-access-x4fnf\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc 
kubenswrapper[4995]: I0126 23:13:23.259592 4995 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.259602 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df626\" (UniqueName: \"kubernetes.io/projected/5166d9b5-534e-4426-8085-a1900c7bdafb-kube-api-access-df626\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.265197 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phjts"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.268322 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-px4t9"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.273253 4995 scope.go:117] "RemoveContainer" containerID="837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.275165 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-px4t9"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.280146 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vsjb7"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.287074 4995 scope.go:117] "RemoveContainer" containerID="8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.287608 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a\": container with ID starting with 8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a not found: ID does not 
exist" containerID="8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.287644 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a"} err="failed to get container status \"8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a\": rpc error: code = NotFound desc = could not find container \"8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a\": container with ID starting with 8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.287671 4995 scope.go:117] "RemoveContainer" containerID="51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.288155 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b\": container with ID starting with 51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b not found: ID does not exist" containerID="51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.288215 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b"} err="failed to get container status \"51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b\": rpc error: code = NotFound desc = could not find container \"51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b\": container with ID starting with 51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.288243 4995 scope.go:117] 
"RemoveContainer" containerID="837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.288713 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b\": container with ID starting with 837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b not found: ID does not exist" containerID="837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.288744 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b"} err="failed to get container status \"837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b\": rpc error: code = NotFound desc = could not find container \"837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b\": container with ID starting with 837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.288767 4995 scope.go:117] "RemoveContainer" containerID="2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.303116 4995 scope.go:117] "RemoveContainer" containerID="4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.315721 4995 scope.go:117] "RemoveContainer" containerID="2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.319797 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5166d9b5-534e-4426-8085-a1900c7bdafb" 
(UID: "5166d9b5-534e-4426-8085-a1900c7bdafb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.351714 4995 scope.go:117] "RemoveContainer" containerID="2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.352037 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40\": container with ID starting with 2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40 not found: ID does not exist" containerID="2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.352070 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40"} err="failed to get container status \"2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40\": rpc error: code = NotFound desc = could not find container \"2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40\": container with ID starting with 2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40 not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.352092 4995 scope.go:117] "RemoveContainer" containerID="4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.352442 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda\": container with ID starting with 4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda not found: ID does not exist" 
containerID="4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.352465 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda"} err="failed to get container status \"4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda\": rpc error: code = NotFound desc = could not find container \"4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda\": container with ID starting with 4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.352479 4995 scope.go:117] "RemoveContainer" containerID="2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.352677 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292\": container with ID starting with 2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292 not found: ID does not exist" containerID="2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.352697 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292"} err="failed to get container status \"2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292\": rpc error: code = NotFound desc = could not find container \"2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292\": container with ID starting with 2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292 not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.352713 4995 scope.go:117] 
"RemoveContainer" containerID="ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.360260 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.367932 4995 scope.go:117] "RemoveContainer" containerID="ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.368368 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad\": container with ID starting with ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad not found: ID does not exist" containerID="ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.368394 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad"} err="failed to get container status \"ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad\": rpc error: code = NotFound desc = could not find container \"ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad\": container with ID starting with ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.368416 4995 scope.go:117] "RemoveContainer" containerID="049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.380771 4995 scope.go:117] "RemoveContainer" containerID="0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6" Jan 26 23:13:23 crc 
kubenswrapper[4995]: I0126 23:13:23.399311 4995 scope.go:117] "RemoveContainer" containerID="9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.411870 4995 scope.go:117] "RemoveContainer" containerID="049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.412573 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429\": container with ID starting with 049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429 not found: ID does not exist" containerID="049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.412598 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429"} err="failed to get container status \"049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429\": rpc error: code = NotFound desc = could not find container \"049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429\": container with ID starting with 049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429 not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.412620 4995 scope.go:117] "RemoveContainer" containerID="0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.413080 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6\": container with ID starting with 0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6 not found: ID does not exist" 
containerID="0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.413134 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6"} err="failed to get container status \"0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6\": rpc error: code = NotFound desc = could not find container \"0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6\": container with ID starting with 0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6 not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.413150 4995 scope.go:117] "RemoveContainer" containerID="9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.413945 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c\": container with ID starting with 9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c not found: ID does not exist" containerID="9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.413965 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c"} err="failed to get container status \"9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c\": rpc error: code = NotFound desc = could not find container \"9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c\": container with ID starting with 9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.413979 4995 scope.go:117] 
"RemoveContainer" containerID="4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.476512 4995 scope.go:117] "RemoveContainer" containerID="e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.492922 4995 scope.go:117] "RemoveContainer" containerID="132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.506500 4995 scope.go:117] "RemoveContainer" containerID="4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.506887 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba\": container with ID starting with 4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba not found: ID does not exist" containerID="4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.506927 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba"} err="failed to get container status \"4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba\": rpc error: code = NotFound desc = could not find container \"4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba\": container with ID starting with 4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.506962 4995 scope.go:117] "RemoveContainer" containerID="e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.507664 4995 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706\": container with ID starting with e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706 not found: ID does not exist" containerID="e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.507701 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706"} err="failed to get container status \"e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706\": rpc error: code = NotFound desc = could not find container \"e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706\": container with ID starting with e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706 not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.507726 4995 scope.go:117] "RemoveContainer" containerID="132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.508242 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92\": container with ID starting with 132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92 not found: ID does not exist" containerID="132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.508275 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92"} err="failed to get container status \"132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92\": rpc error: code = NotFound desc = could not find container 
\"132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92\": container with ID starting with 132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92 not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.537649 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wq2hm"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.543607 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wq2hm"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702169 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wfnlj"] Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702455 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702486 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702506 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702517 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702536 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702549 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702564 4995 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerName="marketplace-operator" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702575 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerName="marketplace-operator" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702590 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702601 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702641 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702654 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702670 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702680 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702696 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702706 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702721 4995 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702733 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702748 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702758 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702770 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702779 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702795 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702804 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702819 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702830 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702973 4995 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702991 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.703007 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerName="marketplace-operator" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.703019 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.703035 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.704190 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.706361 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.718988 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfnlj"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.866933 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-catalog-content\") pod \"certified-operators-wfnlj\" (UID: \"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.867377 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-utilities\") pod \"certified-operators-wfnlj\" (UID: \"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.867453 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9xbv\" (UniqueName: \"kubernetes.io/projected/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-kube-api-access-n9xbv\") pod \"certified-operators-wfnlj\" (UID: \"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.968937 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-utilities\") pod \"certified-operators-wfnlj\" (UID: 
\"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.968997 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9xbv\" (UniqueName: \"kubernetes.io/projected/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-kube-api-access-n9xbv\") pod \"certified-operators-wfnlj\" (UID: \"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.969064 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-catalog-content\") pod \"certified-operators-wfnlj\" (UID: \"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.969739 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-catalog-content\") pod \"certified-operators-wfnlj\" (UID: \"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.969808 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-utilities\") pod \"certified-operators-wfnlj\" (UID: \"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.992061 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9xbv\" (UniqueName: \"kubernetes.io/projected/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-kube-api-access-n9xbv\") pod \"certified-operators-wfnlj\" (UID: 
\"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.030785 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.201553 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" event={"ID":"d781053b-fcf3-44a7-812a-8af6c2c1ab07","Type":"ContainerStarted","Data":"00e0d2c13cb1c5db6d1970ab2569adf6dcc5fce78b5bad46984c10e13eeaf28d"} Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.201923 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.201936 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" event={"ID":"d781053b-fcf3-44a7-812a-8af6c2c1ab07","Type":"ContainerStarted","Data":"758bebf8d8a6cf3e6042b3f391e73a48f38cb9538f65d0792c2280e04765f12b"} Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.204698 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.218847 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" podStartSLOduration=2.218819821 podStartE2EDuration="2.218819821s" podCreationTimestamp="2026-01-26 23:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:13:24.214968885 +0000 UTC m=+308.379676350" watchObservedRunningTime="2026-01-26 23:13:24.218819821 +0000 UTC m=+308.383527296" Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 
23:13:24.461587 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfnlj"]
Jan 26 23:13:24 crc kubenswrapper[4995]: W0126 23:13:24.473497 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf956bbfb_557b_4b78_b2eb_141bdd1ca81f.slice/crio-700688f93c442f79a243178f374e615f654283fc7fe644d1556370284e5d9da4 WatchSource:0}: Error finding container 700688f93c442f79a243178f374e615f654283fc7fe644d1556370284e5d9da4: Status 404 returned error can't find the container with id 700688f93c442f79a243178f374e615f654283fc7fe644d1556370284e5d9da4
Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.523289 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" path="/var/lib/kubelet/pods/38be674d-6ae2-441d-b361-a9eea3b694a7/volumes"
Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.523917 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" path="/var/lib/kubelet/pods/3f9a7b30-dccb-4753-81a1-622853d6ba3c/volumes"
Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.524413 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" path="/var/lib/kubelet/pods/5166d9b5-534e-4426-8085-a1900c7bdafb/volumes"
Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.525498 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" path="/var/lib/kubelet/pods/58513b5e-460e-4344-91e3-1d20e26fd533/volumes"
Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.526072 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" path="/var/lib/kubelet/pods/b7295e1f-e3cb-4710-8763-b02b3e9ed67b/volumes"
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.215281 4995 generic.go:334] "Generic (PLEG): container finished" podID="f956bbfb-557b-4b78-b2eb-141bdd1ca81f" containerID="0bc0c1c748e963f659145c02e76ffb66acc022e851af0ac12bd0e010bad5980c" exitCode=0
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.215395 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfnlj" event={"ID":"f956bbfb-557b-4b78-b2eb-141bdd1ca81f","Type":"ContainerDied","Data":"0bc0c1c748e963f659145c02e76ffb66acc022e851af0ac12bd0e010bad5980c"}
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.215454 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfnlj" event={"ID":"f956bbfb-557b-4b78-b2eb-141bdd1ca81f","Type":"ContainerStarted","Data":"700688f93c442f79a243178f374e615f654283fc7fe644d1556370284e5d9da4"}
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.503814 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-56ct7"]
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.513053 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.516903 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.524134 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-56ct7"]
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.588614 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vhr6\" (UniqueName: \"kubernetes.io/projected/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-kube-api-access-5vhr6\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.588836 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-catalog-content\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.588949 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-utilities\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.690540 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-catalog-content\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.690589 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-utilities\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.690647 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vhr6\" (UniqueName: \"kubernetes.io/projected/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-kube-api-access-5vhr6\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.691319 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-utilities\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.691320 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-catalog-content\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.727740 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vhr6\" (UniqueName: \"kubernetes.io/projected/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-kube-api-access-5vhr6\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.841876 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.110653 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4fw5x"]
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.111768 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.113482 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.122052 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fw5x"]
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.208353 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-utilities\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.208686 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmkmj\" (UniqueName: \"kubernetes.io/projected/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-kube-api-access-hmkmj\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.208709 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-catalog-content\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.222573 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfnlj" event={"ID":"f956bbfb-557b-4b78-b2eb-141bdd1ca81f","Type":"ContainerStarted","Data":"869cc4a79b2582359f95828b43e2010f744e718dd65565aa853c6babb96088d9"}
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.239601 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-56ct7"]
Jan 26 23:13:26 crc kubenswrapper[4995]: W0126 23:13:26.242282 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7af9b1ce_9df1_4d94_ae24_e8ff6cd5edb8.slice/crio-e8e9e352a174904ba1b79ee6974c6e2452e4c36510e8ba8df1cfe3030411691e WatchSource:0}: Error finding container e8e9e352a174904ba1b79ee6974c6e2452e4c36510e8ba8df1cfe3030411691e: Status 404 returned error can't find the container with id e8e9e352a174904ba1b79ee6974c6e2452e4c36510e8ba8df1cfe3030411691e
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.310173 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-utilities\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.310237 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmkmj\" (UniqueName: \"kubernetes.io/projected/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-kube-api-access-hmkmj\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.310260 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-catalog-content\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.311193 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-utilities\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.311641 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-catalog-content\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.334215 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmkmj\" (UniqueName: \"kubernetes.io/projected/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-kube-api-access-hmkmj\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.430779 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.810360 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fw5x"]
Jan 26 23:13:26 crc kubenswrapper[4995]: W0126 23:13:26.817126 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod269f6fbd_326f_45d1_a1a6_ea5da5b7daff.slice/crio-728153c2747477434fd401c0cb9df70ab4b7751efb9b4d56be09e0326d5eda78 WatchSource:0}: Error finding container 728153c2747477434fd401c0cb9df70ab4b7751efb9b4d56be09e0326d5eda78: Status 404 returned error can't find the container with id 728153c2747477434fd401c0cb9df70ab4b7751efb9b4d56be09e0326d5eda78
Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.229274 4995 generic.go:334] "Generic (PLEG): container finished" podID="f956bbfb-557b-4b78-b2eb-141bdd1ca81f" containerID="869cc4a79b2582359f95828b43e2010f744e718dd65565aa853c6babb96088d9" exitCode=0
Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.229329 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfnlj" event={"ID":"f956bbfb-557b-4b78-b2eb-141bdd1ca81f","Type":"ContainerDied","Data":"869cc4a79b2582359f95828b43e2010f744e718dd65565aa853c6babb96088d9"}
Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.230775 4995 generic.go:334] "Generic (PLEG): container finished" podID="7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8" containerID="6d1bc306abbb56bdfd3f785e9c32058825128c9ed36c3a755d3e9aa98945632a" exitCode=0
Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.230854 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56ct7" event={"ID":"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8","Type":"ContainerDied","Data":"6d1bc306abbb56bdfd3f785e9c32058825128c9ed36c3a755d3e9aa98945632a"}
Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.230884 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56ct7" event={"ID":"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8","Type":"ContainerStarted","Data":"e8e9e352a174904ba1b79ee6974c6e2452e4c36510e8ba8df1cfe3030411691e"}
Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.233384 4995 generic.go:334] "Generic (PLEG): container finished" podID="269f6fbd-326f-45d1-a1a6-ea5da5b7daff" containerID="1dd8deea502b9435640c1b2d36aa01a07924105f54896af6152cc73b04c0fc94" exitCode=0
Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.233409 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fw5x" event={"ID":"269f6fbd-326f-45d1-a1a6-ea5da5b7daff","Type":"ContainerDied","Data":"1dd8deea502b9435640c1b2d36aa01a07924105f54896af6152cc73b04c0fc94"}
Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.233429 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fw5x" event={"ID":"269f6fbd-326f-45d1-a1a6-ea5da5b7daff","Type":"ContainerStarted","Data":"728153c2747477434fd401c0cb9df70ab4b7751efb9b4d56be09e0326d5eda78"}
Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.905055 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c6tk5"]
Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.906775 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.910602 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.914283 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6tk5"]
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.031284 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d1ac969-80ec-4450-9f6d-0cca599d2185-utilities\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.031690 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d1ac969-80ec-4450-9f6d-0cca599d2185-catalog-content\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.031747 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h9kc\" (UniqueName: \"kubernetes.io/projected/0d1ac969-80ec-4450-9f6d-0cca599d2185-kube-api-access-6h9kc\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.133439 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h9kc\" (UniqueName: \"kubernetes.io/projected/0d1ac969-80ec-4450-9f6d-0cca599d2185-kube-api-access-6h9kc\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.133530 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d1ac969-80ec-4450-9f6d-0cca599d2185-utilities\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.133561 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d1ac969-80ec-4450-9f6d-0cca599d2185-catalog-content\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.134039 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d1ac969-80ec-4450-9f6d-0cca599d2185-utilities\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.134064 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d1ac969-80ec-4450-9f6d-0cca599d2185-catalog-content\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.153188 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h9kc\" (UniqueName: \"kubernetes.io/projected/0d1ac969-80ec-4450-9f6d-0cca599d2185-kube-api-access-6h9kc\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.239912 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfnlj" event={"ID":"f956bbfb-557b-4b78-b2eb-141bdd1ca81f","Type":"ContainerStarted","Data":"1dba11e769a6270dac1d4d9c5a1002367207e3649a3decbe003296316a627578"}
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.242414 4995 generic.go:334] "Generic (PLEG): container finished" podID="7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8" containerID="d0a13caed867f469c2c5df040207299f53b719babd95d0b957e428ed8605f349" exitCode=0
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.242503 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56ct7" event={"ID":"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8","Type":"ContainerDied","Data":"d0a13caed867f469c2c5df040207299f53b719babd95d0b957e428ed8605f349"}
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.249697 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fw5x" event={"ID":"269f6fbd-326f-45d1-a1a6-ea5da5b7daff","Type":"ContainerStarted","Data":"4c4d4ab22b25ca459dc854a7eab9fe7da37ae97131a1ff91022c2c0a4d09cecd"}
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.264082 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wfnlj" podStartSLOduration=2.646016178 podStartE2EDuration="5.264066804s" podCreationTimestamp="2026-01-26 23:13:23 +0000 UTC" firstStartedPulling="2026-01-26 23:13:25.219364706 +0000 UTC m=+309.384072211" lastFinishedPulling="2026-01-26 23:13:27.837415382 +0000 UTC m=+312.002122837" observedRunningTime="2026-01-26 23:13:28.2581474 +0000 UTC m=+312.422854875" watchObservedRunningTime="2026-01-26 23:13:28.264066804 +0000 UTC m=+312.428774269"
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.264777 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.675740 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6tk5"]
Jan 26 23:13:28 crc kubenswrapper[4995]: W0126 23:13:28.684411 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d1ac969_80ec_4450_9f6d_0cca599d2185.slice/crio-0980a508c33d810d48e6037f8ab68cd73013f60f2b3a4957cd2cd48dc5a3fa05 WatchSource:0}: Error finding container 0980a508c33d810d48e6037f8ab68cd73013f60f2b3a4957cd2cd48dc5a3fa05: Status 404 returned error can't find the container with id 0980a508c33d810d48e6037f8ab68cd73013f60f2b3a4957cd2cd48dc5a3fa05
Jan 26 23:13:29 crc kubenswrapper[4995]: I0126 23:13:29.258251 4995 generic.go:334] "Generic (PLEG): container finished" podID="269f6fbd-326f-45d1-a1a6-ea5da5b7daff" containerID="4c4d4ab22b25ca459dc854a7eab9fe7da37ae97131a1ff91022c2c0a4d09cecd" exitCode=0
Jan 26 23:13:29 crc kubenswrapper[4995]: I0126 23:13:29.258361 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fw5x" event={"ID":"269f6fbd-326f-45d1-a1a6-ea5da5b7daff","Type":"ContainerDied","Data":"4c4d4ab22b25ca459dc854a7eab9fe7da37ae97131a1ff91022c2c0a4d09cecd"}
Jan 26 23:13:29 crc kubenswrapper[4995]: I0126 23:13:29.260718 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d1ac969-80ec-4450-9f6d-0cca599d2185" containerID="15974069ca0e1129e5f854388495c052b9da6ee80619cbe10d1f0f69a0499ab8" exitCode=0
Jan 26 23:13:29 crc kubenswrapper[4995]: I0126 23:13:29.260822 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6tk5" event={"ID":"0d1ac969-80ec-4450-9f6d-0cca599d2185","Type":"ContainerDied","Data":"15974069ca0e1129e5f854388495c052b9da6ee80619cbe10d1f0f69a0499ab8"}
Jan 26 23:13:29 crc kubenswrapper[4995]: I0126 23:13:29.260855 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6tk5" event={"ID":"0d1ac969-80ec-4450-9f6d-0cca599d2185","Type":"ContainerStarted","Data":"0980a508c33d810d48e6037f8ab68cd73013f60f2b3a4957cd2cd48dc5a3fa05"}
Jan 26 23:13:29 crc kubenswrapper[4995]: I0126 23:13:29.264124 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56ct7" event={"ID":"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8","Type":"ContainerStarted","Data":"aa61d4da104d762f1659cd2b569847ec6d832c90898dea7a7290f1d3ff663073"}
Jan 26 23:13:29 crc kubenswrapper[4995]: I0126 23:13:29.300425 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-56ct7" podStartSLOduration=2.643693974 podStartE2EDuration="4.300405221s" podCreationTimestamp="2026-01-26 23:13:25 +0000 UTC" firstStartedPulling="2026-01-26 23:13:27.231873532 +0000 UTC m=+311.396580997" lastFinishedPulling="2026-01-26 23:13:28.888584789 +0000 UTC m=+313.053292244" observedRunningTime="2026-01-26 23:13:29.299923318 +0000 UTC m=+313.464630793" watchObservedRunningTime="2026-01-26 23:13:29.300405221 +0000 UTC m=+313.465112686"
Jan 26 23:13:30 crc kubenswrapper[4995]: I0126 23:13:30.271187 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6tk5" event={"ID":"0d1ac969-80ec-4450-9f6d-0cca599d2185","Type":"ContainerStarted","Data":"6c2f8034351a807d7124964536fc47b671dfe729e217b054284202b6310a60f4"}
Jan 26 23:13:30 crc kubenswrapper[4995]: I0126 23:13:30.275400 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fw5x" event={"ID":"269f6fbd-326f-45d1-a1a6-ea5da5b7daff","Type":"ContainerStarted","Data":"e278f54b8414d0bcbf7e0030eb0f4b540676e7142ab41c203e4f3d401df653d3"}
Jan 26 23:13:30 crc kubenswrapper[4995]: I0126 23:13:30.312986 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4fw5x" podStartSLOduration=1.925396978 podStartE2EDuration="4.312965128s" podCreationTimestamp="2026-01-26 23:13:26 +0000 UTC" firstStartedPulling="2026-01-26 23:13:27.23506578 +0000 UTC m=+311.399773265" lastFinishedPulling="2026-01-26 23:13:29.62263395 +0000 UTC m=+313.787341415" observedRunningTime="2026-01-26 23:13:30.30799331 +0000 UTC m=+314.472700775" watchObservedRunningTime="2026-01-26 23:13:30.312965128 +0000 UTC m=+314.477672593"
Jan 26 23:13:31 crc kubenswrapper[4995]: I0126 23:13:31.282550 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d1ac969-80ec-4450-9f6d-0cca599d2185" containerID="6c2f8034351a807d7124964536fc47b671dfe729e217b054284202b6310a60f4" exitCode=0
Jan 26 23:13:31 crc kubenswrapper[4995]: I0126 23:13:31.283840 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6tk5" event={"ID":"0d1ac969-80ec-4450-9f6d-0cca599d2185","Type":"ContainerDied","Data":"6c2f8034351a807d7124964536fc47b671dfe729e217b054284202b6310a60f4"}
Jan 26 23:13:33 crc kubenswrapper[4995]: I0126 23:13:33.296883 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6tk5" event={"ID":"0d1ac969-80ec-4450-9f6d-0cca599d2185","Type":"ContainerStarted","Data":"40458dc2242e78866fdf834bbc6ffea6d129bd8e9c66e43c21f285307c140255"}
Jan 26 23:13:33 crc kubenswrapper[4995]: I0126 23:13:33.318446 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c6tk5" podStartSLOduration=3.892994101 podStartE2EDuration="6.318429029s" podCreationTimestamp="2026-01-26 23:13:27 +0000 UTC" firstStartedPulling="2026-01-26 23:13:29.262166361 +0000 UTC m=+313.426873826" lastFinishedPulling="2026-01-26 23:13:31.687601259 +0000 UTC m=+315.852308754" observedRunningTime="2026-01-26 23:13:33.315004354 +0000 UTC m=+317.479711859" watchObservedRunningTime="2026-01-26 23:13:33.318429029 +0000 UTC m=+317.483136494"
Jan 26 23:13:34 crc kubenswrapper[4995]: I0126 23:13:34.031561 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wfnlj"
Jan 26 23:13:34 crc kubenswrapper[4995]: I0126 23:13:34.031844 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wfnlj"
Jan 26 23:13:34 crc kubenswrapper[4995]: I0126 23:13:34.079951 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wfnlj"
Jan 26 23:13:34 crc kubenswrapper[4995]: I0126 23:13:34.139465 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv"
Jan 26 23:13:34 crc kubenswrapper[4995]: I0126 23:13:34.200147 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hjxrn"]
Jan 26 23:13:34 crc kubenswrapper[4995]: I0126 23:13:34.347433 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wfnlj"
Jan 26 23:13:35 crc kubenswrapper[4995]: I0126 23:13:35.842960 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:35 crc kubenswrapper[4995]: I0126 23:13:35.843382 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:35 crc kubenswrapper[4995]: I0126 23:13:35.901663 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:36 crc kubenswrapper[4995]: I0126 23:13:36.357175 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-56ct7"
Jan 26 23:13:36 crc kubenswrapper[4995]: I0126 23:13:36.431838 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:36 crc kubenswrapper[4995]: I0126 23:13:36.431883 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:36 crc kubenswrapper[4995]: I0126 23:13:36.479926 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:37 crc kubenswrapper[4995]: I0126 23:13:37.357650 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4fw5x"
Jan 26 23:13:38 crc kubenswrapper[4995]: I0126 23:13:38.265763 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:38 crc kubenswrapper[4995]: I0126 23:13:38.265804 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:38 crc kubenswrapper[4995]: I0126 23:13:38.306419 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:38 crc kubenswrapper[4995]: I0126 23:13:38.362246 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c6tk5"
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.249716 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" podUID="c5507dd1-0894-4d9b-982d-817ebbb0092d" containerName="registry" containerID="cri-o://5f6d3ec7b74d90b9b5fb45870ef587ee2f0fc428a2b3bcd5b815fc5bb39eb662" gracePeriod=30
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.466900 4995 generic.go:334] "Generic (PLEG): container finished" podID="c5507dd1-0894-4d9b-982d-817ebbb0092d" containerID="5f6d3ec7b74d90b9b5fb45870ef587ee2f0fc428a2b3bcd5b815fc5bb39eb662" exitCode=0
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.467041 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" event={"ID":"c5507dd1-0894-4d9b-982d-817ebbb0092d","Type":"ContainerDied","Data":"5f6d3ec7b74d90b9b5fb45870ef587ee2f0fc428a2b3bcd5b815fc5bb39eb662"}
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.672722 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn"
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.834677 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-certificates\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") "
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.834748 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-tls\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") "
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.834827 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-bound-sa-token\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") "
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.835142 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") "
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.835236 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5507dd1-0894-4d9b-982d-817ebbb0092d-ca-trust-extracted\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") "
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.835291 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-trusted-ca\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") "
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.835334 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7f2l\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-kube-api-access-n7f2l\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") "
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.835405 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5507dd1-0894-4d9b-982d-817ebbb0092d-installation-pull-secrets\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") "
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.836618 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.836767 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.843826 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.844318 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5507dd1-0894-4d9b-982d-817ebbb0092d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.844590 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "bound-sa-token".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.847666 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-kube-api-access-n7f2l" (OuterVolumeSpecName: "kube-api-access-n7f2l") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "kube-api-access-n7f2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.850094 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.870755 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5507dd1-0894-4d9b-982d-817ebbb0092d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.937440 4995 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5507dd1-0894-4d9b-982d-817ebbb0092d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.937491 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.937504 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7f2l\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-kube-api-access-n7f2l\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.937518 4995 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5507dd1-0894-4d9b-982d-817ebbb0092d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.937529 4995 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.937542 4995 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.937551 4995 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 23:14:00 crc 
kubenswrapper[4995]: I0126 23:14:00.475591 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" event={"ID":"c5507dd1-0894-4d9b-982d-817ebbb0092d","Type":"ContainerDied","Data":"c0781d7b5c2499fcb553527a8fd295fe436cb8680c543a89922297ff4d9b554f"} Jan 26 23:14:00 crc kubenswrapper[4995]: I0126 23:14:00.475678 4995 scope.go:117] "RemoveContainer" containerID="5f6d3ec7b74d90b9b5fb45870ef587ee2f0fc428a2b3bcd5b815fc5bb39eb662" Jan 26 23:14:00 crc kubenswrapper[4995]: I0126 23:14:00.475757 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:14:00 crc kubenswrapper[4995]: I0126 23:14:00.538186 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hjxrn"] Jan 26 23:14:00 crc kubenswrapper[4995]: I0126 23:14:00.538246 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hjxrn"] Jan 26 23:14:02 crc kubenswrapper[4995]: I0126 23:14:02.528901 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5507dd1-0894-4d9b-982d-817ebbb0092d" path="/var/lib/kubelet/pods/c5507dd1-0894-4d9b-982d-817ebbb0092d/volumes" Jan 26 23:14:10 crc kubenswrapper[4995]: I0126 23:14:10.893990 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:14:10 crc kubenswrapper[4995]: I0126 23:14:10.894734 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:14:40 crc kubenswrapper[4995]: I0126 23:14:40.894391 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:14:40 crc kubenswrapper[4995]: I0126 23:14:40.895093 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.163746 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf"] Jan 26 23:15:00 crc kubenswrapper[4995]: E0126 23:15:00.164518 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5507dd1-0894-4d9b-982d-817ebbb0092d" containerName="registry" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.164536 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5507dd1-0894-4d9b-982d-817ebbb0092d" containerName="registry" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.164670 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5507dd1-0894-4d9b-982d-817ebbb0092d" containerName="registry" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.165213 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.167354 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.167510 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.173332 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf"] Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.332985 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eedc9650-dfb6-4f85-854a-c4f87310cdc9-secret-volume\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.333083 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eedc9650-dfb6-4f85-854a-c4f87310cdc9-config-volume\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.333146 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6kgq\" (UniqueName: \"kubernetes.io/projected/eedc9650-dfb6-4f85-854a-c4f87310cdc9-kube-api-access-t6kgq\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.434387 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eedc9650-dfb6-4f85-854a-c4f87310cdc9-config-volume\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.434476 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6kgq\" (UniqueName: \"kubernetes.io/projected/eedc9650-dfb6-4f85-854a-c4f87310cdc9-kube-api-access-t6kgq\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.434556 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eedc9650-dfb6-4f85-854a-c4f87310cdc9-secret-volume\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.435336 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eedc9650-dfb6-4f85-854a-c4f87310cdc9-config-volume\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.442766 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/eedc9650-dfb6-4f85-854a-c4f87310cdc9-secret-volume\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.451326 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6kgq\" (UniqueName: \"kubernetes.io/projected/eedc9650-dfb6-4f85-854a-c4f87310cdc9-kube-api-access-t6kgq\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.483353 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.672789 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf"] Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.838069 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" event={"ID":"eedc9650-dfb6-4f85-854a-c4f87310cdc9","Type":"ContainerStarted","Data":"1a29ecbf7c1dc8a0a44da58998a1ee9726769c1c2e698fc6c995631738b17836"} Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.838123 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" event={"ID":"eedc9650-dfb6-4f85-854a-c4f87310cdc9","Type":"ContainerStarted","Data":"a3256bdf9b257aeb9d374e3b0ea9e090ff619c32c4090159693dd0fc5ce813ff"} Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.853144 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" 
podStartSLOduration=0.85313057 podStartE2EDuration="853.13057ms" podCreationTimestamp="2026-01-26 23:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:15:00.850624828 +0000 UTC m=+405.015332303" watchObservedRunningTime="2026-01-26 23:15:00.85313057 +0000 UTC m=+405.017838035" Jan 26 23:15:01 crc kubenswrapper[4995]: I0126 23:15:01.846661 4995 generic.go:334] "Generic (PLEG): container finished" podID="eedc9650-dfb6-4f85-854a-c4f87310cdc9" containerID="1a29ecbf7c1dc8a0a44da58998a1ee9726769c1c2e698fc6c995631738b17836" exitCode=0 Jan 26 23:15:01 crc kubenswrapper[4995]: I0126 23:15:01.847192 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" event={"ID":"eedc9650-dfb6-4f85-854a-c4f87310cdc9","Type":"ContainerDied","Data":"1a29ecbf7c1dc8a0a44da58998a1ee9726769c1c2e698fc6c995631738b17836"} Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.031124 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.175785 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eedc9650-dfb6-4f85-854a-c4f87310cdc9-config-volume\") pod \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.175938 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6kgq\" (UniqueName: \"kubernetes.io/projected/eedc9650-dfb6-4f85-854a-c4f87310cdc9-kube-api-access-t6kgq\") pod \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.175972 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eedc9650-dfb6-4f85-854a-c4f87310cdc9-secret-volume\") pod \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.177016 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eedc9650-dfb6-4f85-854a-c4f87310cdc9-config-volume" (OuterVolumeSpecName: "config-volume") pod "eedc9650-dfb6-4f85-854a-c4f87310cdc9" (UID: "eedc9650-dfb6-4f85-854a-c4f87310cdc9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.183969 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eedc9650-dfb6-4f85-854a-c4f87310cdc9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eedc9650-dfb6-4f85-854a-c4f87310cdc9" (UID: "eedc9650-dfb6-4f85-854a-c4f87310cdc9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.186457 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eedc9650-dfb6-4f85-854a-c4f87310cdc9-kube-api-access-t6kgq" (OuterVolumeSpecName: "kube-api-access-t6kgq") pod "eedc9650-dfb6-4f85-854a-c4f87310cdc9" (UID: "eedc9650-dfb6-4f85-854a-c4f87310cdc9"). InnerVolumeSpecName "kube-api-access-t6kgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.278011 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6kgq\" (UniqueName: \"kubernetes.io/projected/eedc9650-dfb6-4f85-854a-c4f87310cdc9-kube-api-access-t6kgq\") on node \"crc\" DevicePath \"\"" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.278063 4995 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eedc9650-dfb6-4f85-854a-c4f87310cdc9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.278076 4995 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eedc9650-dfb6-4f85-854a-c4f87310cdc9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.860135 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" event={"ID":"eedc9650-dfb6-4f85-854a-c4f87310cdc9","Type":"ContainerDied","Data":"a3256bdf9b257aeb9d374e3b0ea9e090ff619c32c4090159693dd0fc5ce813ff"} Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.860202 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3256bdf9b257aeb9d374e3b0ea9e090ff619c32c4090159693dd0fc5ce813ff" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.860243 4995 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:10 crc kubenswrapper[4995]: I0126 23:15:10.893338 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:15:10 crc kubenswrapper[4995]: I0126 23:15:10.893737 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:15:10 crc kubenswrapper[4995]: I0126 23:15:10.893794 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:15:10 crc kubenswrapper[4995]: I0126 23:15:10.894531 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91eb61e09ae5d6d6198d16f6e7e69e569eb136d572b2d062913b6b75ef9fce29"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:15:10 crc kubenswrapper[4995]: I0126 23:15:10.894665 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" containerID="cri-o://91eb61e09ae5d6d6198d16f6e7e69e569eb136d572b2d062913b6b75ef9fce29" gracePeriod=600 Jan 26 23:15:11 crc kubenswrapper[4995]: I0126 23:15:11.916246 4995 generic.go:334] "Generic (PLEG): container 
finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="91eb61e09ae5d6d6198d16f6e7e69e569eb136d572b2d062913b6b75ef9fce29" exitCode=0 Jan 26 23:15:11 crc kubenswrapper[4995]: I0126 23:15:11.916326 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"91eb61e09ae5d6d6198d16f6e7e69e569eb136d572b2d062913b6b75ef9fce29"} Jan 26 23:15:11 crc kubenswrapper[4995]: I0126 23:15:11.917852 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"e7586fc74dcbd4a07d6a21db761bf1e0053c5b99f541975fd3e7df1c8ddea8ab"} Jan 26 23:15:11 crc kubenswrapper[4995]: I0126 23:15:11.917921 4995 scope.go:117] "RemoveContainer" containerID="3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c" Jan 26 23:17:40 crc kubenswrapper[4995]: I0126 23:17:40.893866 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:17:40 crc kubenswrapper[4995]: I0126 23:17:40.894746 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:18:10 crc kubenswrapper[4995]: I0126 23:18:10.893773 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:18:10 crc kubenswrapper[4995]: I0126 23:18:10.894680 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:18:40 crc kubenswrapper[4995]: I0126 23:18:40.893369 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:18:40 crc kubenswrapper[4995]: I0126 23:18:40.893994 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:18:40 crc kubenswrapper[4995]: I0126 23:18:40.894058 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:18:40 crc kubenswrapper[4995]: I0126 23:18:40.894678 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7586fc74dcbd4a07d6a21db761bf1e0053c5b99f541975fd3e7df1c8ddea8ab"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:18:40 crc kubenswrapper[4995]: I0126 23:18:40.894749 4995 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" containerID="cri-o://e7586fc74dcbd4a07d6a21db761bf1e0053c5b99f541975fd3e7df1c8ddea8ab" gracePeriod=600 Jan 26 23:18:41 crc kubenswrapper[4995]: I0126 23:18:41.277532 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="e7586fc74dcbd4a07d6a21db761bf1e0053c5b99f541975fd3e7df1c8ddea8ab" exitCode=0 Jan 26 23:18:41 crc kubenswrapper[4995]: I0126 23:18:41.277599 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"e7586fc74dcbd4a07d6a21db761bf1e0053c5b99f541975fd3e7df1c8ddea8ab"} Jan 26 23:18:41 crc kubenswrapper[4995]: I0126 23:18:41.277880 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"b4093ba3ef240f4a22dc52fad4871f90a715052046ec4b9cbcd3de91d7cc9c46"} Jan 26 23:18:41 crc kubenswrapper[4995]: I0126 23:18:41.277902 4995 scope.go:117] "RemoveContainer" containerID="91eb61e09ae5d6d6198d16f6e7e69e569eb136d572b2d062913b6b75ef9fce29" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.640617 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh"] Jan 26 23:18:50 crc kubenswrapper[4995]: E0126 23:18:50.641459 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eedc9650-dfb6-4f85-854a-c4f87310cdc9" containerName="collect-profiles" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.641475 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="eedc9650-dfb6-4f85-854a-c4f87310cdc9" 
containerName="collect-profiles" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.641618 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="eedc9650-dfb6-4f85-854a-c4f87310cdc9" containerName="collect-profiles" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.642503 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.645833 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.656121 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh"] Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.745319 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkz4c\" (UniqueName: \"kubernetes.io/projected/388e02fc-e28d-4d4a-94ec-464eb7573a8d-kube-api-access-xkz4c\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.745421 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.745449 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.846183 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.846223 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.846264 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkz4c\" (UniqueName: \"kubernetes.io/projected/388e02fc-e28d-4d4a-94ec-464eb7573a8d-kube-api-access-xkz4c\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.846764 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-util\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.846970 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.882205 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkz4c\" (UniqueName: \"kubernetes.io/projected/388e02fc-e28d-4d4a-94ec-464eb7573a8d-kube-api-access-xkz4c\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.960618 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:51 crc kubenswrapper[4995]: I0126 23:18:51.235905 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh"] Jan 26 23:18:51 crc kubenswrapper[4995]: I0126 23:18:51.358344 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" event={"ID":"388e02fc-e28d-4d4a-94ec-464eb7573a8d","Type":"ContainerStarted","Data":"8277d63f614db9ae56e3251d7d5e84985fd410093cab7a766b2f9a9f29668959"} Jan 26 23:18:52 crc kubenswrapper[4995]: I0126 23:18:52.365397 4995 generic.go:334] "Generic (PLEG): container finished" podID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerID="85eb0fa94d63160740827492f277358e3378c81106c204db8d7e074a29c14217" exitCode=0 Jan 26 23:18:52 crc kubenswrapper[4995]: I0126 23:18:52.365480 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" event={"ID":"388e02fc-e28d-4d4a-94ec-464eb7573a8d","Type":"ContainerDied","Data":"85eb0fa94d63160740827492f277358e3378c81106c204db8d7e074a29c14217"} Jan 26 23:18:52 crc kubenswrapper[4995]: I0126 23:18:52.366875 4995 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 23:18:54 crc kubenswrapper[4995]: I0126 23:18:54.379387 4995 generic.go:334] "Generic (PLEG): container finished" podID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerID="c2a02a28d3b2dfbaeacf2a40582479f8bde3db6c4aafa53f23e9dc18038ba3c1" exitCode=0 Jan 26 23:18:54 crc kubenswrapper[4995]: I0126 23:18:54.379515 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" 
event={"ID":"388e02fc-e28d-4d4a-94ec-464eb7573a8d","Type":"ContainerDied","Data":"c2a02a28d3b2dfbaeacf2a40582479f8bde3db6c4aafa53f23e9dc18038ba3c1"} Jan 26 23:18:55 crc kubenswrapper[4995]: I0126 23:18:55.392444 4995 generic.go:334] "Generic (PLEG): container finished" podID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerID="484233adc44574c1d5f7ad430fc69ecaaf1d958455f2abe50410f50997ca946c" exitCode=0 Jan 26 23:18:55 crc kubenswrapper[4995]: I0126 23:18:55.392511 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" event={"ID":"388e02fc-e28d-4d4a-94ec-464eb7573a8d","Type":"ContainerDied","Data":"484233adc44574c1d5f7ad430fc69ecaaf1d958455f2abe50410f50997ca946c"} Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.747770 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.848616 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-util\") pod \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.848707 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkz4c\" (UniqueName: \"kubernetes.io/projected/388e02fc-e28d-4d4a-94ec-464eb7573a8d-kube-api-access-xkz4c\") pod \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.848815 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-bundle\") pod \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\" (UID: 
\"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.852384 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-bundle" (OuterVolumeSpecName: "bundle") pod "388e02fc-e28d-4d4a-94ec-464eb7573a8d" (UID: "388e02fc-e28d-4d4a-94ec-464eb7573a8d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.857930 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/388e02fc-e28d-4d4a-94ec-464eb7573a8d-kube-api-access-xkz4c" (OuterVolumeSpecName: "kube-api-access-xkz4c") pod "388e02fc-e28d-4d4a-94ec-464eb7573a8d" (UID: "388e02fc-e28d-4d4a-94ec-464eb7573a8d"). InnerVolumeSpecName "kube-api-access-xkz4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.887613 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-util" (OuterVolumeSpecName: "util") pod "388e02fc-e28d-4d4a-94ec-464eb7573a8d" (UID: "388e02fc-e28d-4d4a-94ec-464eb7573a8d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.949966 4995 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.950016 4995 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-util\") on node \"crc\" DevicePath \"\"" Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.950036 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkz4c\" (UniqueName: \"kubernetes.io/projected/388e02fc-e28d-4d4a-94ec-464eb7573a8d-kube-api-access-xkz4c\") on node \"crc\" DevicePath \"\"" Jan 26 23:18:57 crc kubenswrapper[4995]: I0126 23:18:57.415205 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" event={"ID":"388e02fc-e28d-4d4a-94ec-464eb7573a8d","Type":"ContainerDied","Data":"8277d63f614db9ae56e3251d7d5e84985fd410093cab7a766b2f9a9f29668959"} Jan 26 23:18:57 crc kubenswrapper[4995]: I0126 23:18:57.415705 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8277d63f614db9ae56e3251d7d5e84985fd410093cab7a766b2f9a9f29668959" Jan 26 23:18:57 crc kubenswrapper[4995]: I0126 23:18:57.415298 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.381683 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l9xmp"] Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.382726 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovn-controller" containerID="cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.382803 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="nbdb" containerID="cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.382875 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.382901 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="sbdb" containerID="cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.382970 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" 
containerName="kube-rbac-proxy-node" containerID="cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.383022 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovn-acl-logging" containerID="cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.383002 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="northd" containerID="cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.484683 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" containerID="cri-o://e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.755963 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/2.log" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.763131 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovn-acl-logging/0.log" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.763798 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovn-controller/0.log" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.764260 4995 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.839579 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2d2rq"] Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.839889 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerName="extract" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.839914 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerName="extract" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.839932 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerName="pull" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.839943 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerName="pull" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.839959 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.839971 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.839984 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerName="util" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840013 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerName="util" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840028 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" 
containerName="kube-rbac-proxy-ovn-metrics" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840038 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840049 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="northd" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840060 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="northd" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840074 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="sbdb" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840084 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="sbdb" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840124 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovn-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840136 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovn-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840152 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="nbdb" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840163 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="nbdb" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840177 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc 
kubenswrapper[4995]: I0126 23:19:01.840188 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840203 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840213 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840229 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovn-acl-logging" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840240 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovn-acl-logging" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840259 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kubecfg-setup" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840269 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kubecfg-setup" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840286 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kube-rbac-proxy-node" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840297 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kube-rbac-proxy-node" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840446 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="northd" Jan 26 23:19:01 crc 
kubenswrapper[4995]: I0126 23:19:01.840460 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerName="extract" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840476 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840487 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840503 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="sbdb" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840517 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovn-acl-logging" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840530 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840542 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kube-rbac-proxy-node" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840556 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840567 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="nbdb" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840582 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" 
containerName="ovn-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840717 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840729 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840880 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.844012 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.871755 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-netns\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.871802 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-var-lib-openvswitch\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.871864 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-script-lib\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872009 4995 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-openvswitch\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872087 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872299 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872381 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872458 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-kubelet\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872483 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-slash\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872510 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-systemd\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872507 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872535 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-etc-openvswitch\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872550 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-slash" (OuterVolumeSpecName: "host-slash") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872563 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngr8z\" (UniqueName: \"kubernetes.io/projected/be4486f1-6ac2-4655-aff8-634049c9aa6c-kube-api-access-ngr8z\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872584 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-config\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872606 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-node-log\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872638 4995 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-env-overrides\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872631 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872660 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872685 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-log-socket\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872713 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-ovn\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872727 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-netd\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872745 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-systemd-units\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872761 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-ovn-kubernetes\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872783 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-bin\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872805 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovn-node-metrics-cert\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872934 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872955 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-ovnkube-config\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872977 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-env-overrides\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872992 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-cni-bin\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873016 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-run-netns\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873014 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: 
"be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873035 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873052 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-ovnkube-script-lib\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873071 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpklb\" (UniqueName: \"kubernetes.io/projected/f03c0b25-4269-4418-9106-08802fbf9f1a-kube-api-access-gpklb\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873110 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873128 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-ovn\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873143 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-etc-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873160 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-var-lib-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873181 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873187 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-node-log\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873221 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-node-log" (OuterVolumeSpecName: "node-log") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873240 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-systemd\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873269 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-log-socket\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873293 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-systemd-units\") pod \"ovnkube-node-2d2rq\" (UID: 
\"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873316 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-cni-netd\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873340 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-kubelet\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873446 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873475 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873483 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f03c0b25-4269-4418-9106-08802fbf9f1a-ovn-node-metrics-cert\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873493 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-log-socket" (OuterVolumeSpecName: "log-socket") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873507 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-slash\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873512 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873538 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873558 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873592 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873630 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873659 4995 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-node-log\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873670 4995 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873680 4995 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873690 4995 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-log-socket\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873700 4995 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873708 4995 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873716 4995 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 
23:19:01.873724 4995 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873732 4995 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873740 4995 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873748 4995 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873756 4995 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-slash\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873764 4995 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873772 4995 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.885286 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.885360 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be4486f1-6ac2-4655-aff8-634049c9aa6c-kube-api-access-ngr8z" (OuterVolumeSpecName: "kube-api-access-ngr8z") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "kube-api-access-ngr8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.897039 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975008 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-systemd\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975149 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-log-socket\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975189 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-log-socket\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975140 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-systemd\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975202 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-systemd-units\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975273 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-systemd-units\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975295 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-cni-netd\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975367 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-cni-netd\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975384 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f03c0b25-4269-4418-9106-08802fbf9f1a-ovn-node-metrics-cert\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975465 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-kubelet\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975496 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-slash\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975540 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975564 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-ovnkube-config\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975575 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-kubelet\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975601 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-env-overrides\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975603 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-slash\") 
pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975626 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-cni-bin\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975654 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-cni-bin\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975685 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-run-netns\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975682 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975721 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2d2rq\" (UID: 
\"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975749 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-ovnkube-script-lib\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975776 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpklb\" (UniqueName: \"kubernetes.io/projected/f03c0b25-4269-4418-9106-08802fbf9f1a-kube-api-access-gpklb\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975821 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-etc-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975842 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975862 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-ovn\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975889 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-var-lib-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975930 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-node-log\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975986 4995 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976004 4995 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976017 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngr8z\" (UniqueName: \"kubernetes.io/projected/be4486f1-6ac2-4655-aff8-634049c9aa6c-kube-api-access-ngr8z\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976030 4995 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976041 4995 reconciler_common.go:293] 
"Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976052 4995 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976083 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-node-log\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975752 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-run-netns\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975777 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976441 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-env-overrides\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" 
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976467 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-etc-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976489 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-ovn\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976501 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-var-lib-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976521 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976553 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-ovnkube-script-lib\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976726 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-ovnkube-config\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.980019 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f03c0b25-4269-4418-9106-08802fbf9f1a-ovn-node-metrics-cert\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.997089 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpklb\" (UniqueName: \"kubernetes.io/projected/f03c0b25-4269-4418-9106-08802fbf9f1a-kube-api-access-gpklb\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.161337 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:02 crc kubenswrapper[4995]: W0126 23:19:02.196290 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf03c0b25_4269_4418_9106_08802fbf9f1a.slice/crio-19fac327ba6abd9994dd32d4df3687589da88106fbd236094ed36ded9daa9ff2 WatchSource:0}: Error finding container 19fac327ba6abd9994dd32d4df3687589da88106fbd236094ed36ded9daa9ff2: Status 404 returned error can't find the container with id 19fac327ba6abd9994dd32d4df3687589da88106fbd236094ed36ded9daa9ff2 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.444821 4995 generic.go:334] "Generic (PLEG): container finished" podID="f03c0b25-4269-4418-9106-08802fbf9f1a" containerID="cf472edd5a152f510eb6e5f79e8533795602599493a77c5746f934a3ea9233e1" exitCode=0 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.444851 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerDied","Data":"cf472edd5a152f510eb6e5f79e8533795602599493a77c5746f934a3ea9233e1"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.445158 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"19fac327ba6abd9994dd32d4df3687589da88106fbd236094ed36ded9daa9ff2"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.447633 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hln88_4ba70657-ea12-4a85-9ec3-c1423b5b6912/kube-multus/1.log" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.448292 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hln88_4ba70657-ea12-4a85-9ec3-c1423b5b6912/kube-multus/0.log" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.448359 
4995 generic.go:334] "Generic (PLEG): container finished" podID="4ba70657-ea12-4a85-9ec3-c1423b5b6912" containerID="c1c729b92e56f57861fb9e9cb3255d4e859441764e1404ed6d2ec73d8bf2cc23" exitCode=2 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.448482 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hln88" event={"ID":"4ba70657-ea12-4a85-9ec3-c1423b5b6912","Type":"ContainerDied","Data":"c1c729b92e56f57861fb9e9cb3255d4e859441764e1404ed6d2ec73d8bf2cc23"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.448593 4995 scope.go:117] "RemoveContainer" containerID="cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.448990 4995 scope.go:117] "RemoveContainer" containerID="c1c729b92e56f57861fb9e9cb3255d4e859441764e1404ed6d2ec73d8bf2cc23" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.449304 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hln88_openshift-multus(4ba70657-ea12-4a85-9ec3-c1423b5b6912)\"" pod="openshift-multus/multus-hln88" podUID="4ba70657-ea12-4a85-9ec3-c1423b5b6912" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.455735 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/2.log" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.459327 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovn-acl-logging/0.log" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.459961 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovn-controller/0.log" Jan 26 23:19:02 crc kubenswrapper[4995]: 
I0126 23:19:02.460452 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e" exitCode=0 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460490 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e" exitCode=0 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460507 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7" exitCode=0 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460521 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845" exitCode=0 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460534 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f" exitCode=0 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460547 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6" exitCode=0 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460560 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde" exitCode=143 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460574 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e" 
exitCode=143 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460605 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460646 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460670 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460691 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460710 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460731 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460752 4995 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460772 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460783 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460794 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460805 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460815 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460826 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460837 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460849 4995 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460860 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460874 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460891 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460903 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460913 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460925 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460935 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} Jan 26 
23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460945 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460956 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460966 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460976 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460987 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461001 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461018 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461031 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461043 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461053 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461064 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461075 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461086 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461096 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461144 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461159 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461179 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"0108074f5a92b88611ab160f29c724e30a5806d5f87702c7dcc0e14bc5062f52"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461204 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461216 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461227 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461238 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461248 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461258 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461270 4995 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461281 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461291 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461301 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461444 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.506698 4995 scope.go:117] "RemoveContainer" containerID="e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.556714 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l9xmp"] Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.567290 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l9xmp"] Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.579330 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.610940 4995 scope.go:117] "RemoveContainer" containerID="f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.630337 4995 scope.go:117] "RemoveContainer" containerID="01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.647197 4995 scope.go:117] "RemoveContainer" containerID="eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.667362 4995 scope.go:117] "RemoveContainer" containerID="4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.692951 4995 scope.go:117] "RemoveContainer" containerID="424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.743717 4995 scope.go:117] "RemoveContainer" containerID="756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.782076 4995 scope.go:117] "RemoveContainer" 
containerID="681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.814638 4995 scope.go:117] "RemoveContainer" containerID="0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.868063 4995 scope.go:117] "RemoveContainer" containerID="e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.869145 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": container with ID starting with e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e not found: ID does not exist" containerID="e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869194 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} err="failed to get container status \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": rpc error: code = NotFound desc = could not find container \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": container with ID starting with e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869218 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.869424 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": container with ID starting with 
09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad not found: ID does not exist" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869444 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} err="failed to get container status \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": rpc error: code = NotFound desc = could not find container \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": container with ID starting with 09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869472 4995 scope.go:117] "RemoveContainer" containerID="f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.869657 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": container with ID starting with f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e not found: ID does not exist" containerID="f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869676 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} err="failed to get container status \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": rpc error: code = NotFound desc = could not find container \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": container with ID starting with f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e not found: ID does not 
exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869704 4995 scope.go:117] "RemoveContainer" containerID="01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.869890 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": container with ID starting with 01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7 not found: ID does not exist" containerID="01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869909 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} err="failed to get container status \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": rpc error: code = NotFound desc = could not find container \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": container with ID starting with 01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869936 4995 scope.go:117] "RemoveContainer" containerID="eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.870138 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": container with ID starting with eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845 not found: ID does not exist" containerID="eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.870173 4995 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} err="failed to get container status \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": rpc error: code = NotFound desc = could not find container \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": container with ID starting with eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.870187 4995 scope.go:117] "RemoveContainer" containerID="4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.870442 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": container with ID starting with 4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f not found: ID does not exist" containerID="4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.870462 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} err="failed to get container status \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": rpc error: code = NotFound desc = could not find container \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": container with ID starting with 4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.870491 4995 scope.go:117] "RemoveContainer" containerID="424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.870666 4995 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": container with ID starting with 424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6 not found: ID does not exist" containerID="424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.870686 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} err="failed to get container status \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": rpc error: code = NotFound desc = could not find container \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": container with ID starting with 424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.870697 4995 scope.go:117] "RemoveContainer" containerID="756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.871977 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": container with ID starting with 756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde not found: ID does not exist" containerID="756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.871997 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} err="failed to get container status \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": rpc error: code = NotFound desc = could 
not find container \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": container with ID starting with 756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.872026 4995 scope.go:117] "RemoveContainer" containerID="681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"
Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.872337 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": container with ID starting with 681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e not found: ID does not exist" containerID="681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.872385 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} err="failed to get container status \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": rpc error: code = NotFound desc = could not find container \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": container with ID starting with 681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.872418 4995 scope.go:117] "RemoveContainer" containerID="0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"
Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.872711 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": container with ID starting with 0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab not found: ID does not exist" containerID="0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.872768 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} err="failed to get container status \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": rpc error: code = NotFound desc = could not find container \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": container with ID starting with 0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.872787 4995 scope.go:117] "RemoveContainer" containerID="e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873007 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} err="failed to get container status \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": rpc error: code = NotFound desc = could not find container \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": container with ID starting with e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873025 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873338 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} err="failed to get container status \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": rpc error: code = NotFound desc = could not find container \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": container with ID starting with 09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873356 4995 scope.go:117] "RemoveContainer" containerID="f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873553 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} err="failed to get container status \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": rpc error: code = NotFound desc = could not find container \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": container with ID starting with f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873570 4995 scope.go:117] "RemoveContainer" containerID="01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873804 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} err="failed to get container status \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": rpc error: code = NotFound desc = could not find container \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": container with ID starting with 01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7 not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873817 4995 scope.go:117] "RemoveContainer" containerID="eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.874392 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} err="failed to get container status \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": rpc error: code = NotFound desc = could not find container \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": container with ID starting with eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845 not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.874412 4995 scope.go:117] "RemoveContainer" containerID="4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.874666 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} err="failed to get container status \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": rpc error: code = NotFound desc = could not find container \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": container with ID starting with 4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.874685 4995 scope.go:117] "RemoveContainer" containerID="424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.875548 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} err="failed to get container status \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": rpc error: code = NotFound desc = could not find container \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": container with ID starting with 424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6 not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.875593 4995 scope.go:117] "RemoveContainer" containerID="756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.876859 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} err="failed to get container status \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": rpc error: code = NotFound desc = could not find container \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": container with ID starting with 756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.876889 4995 scope.go:117] "RemoveContainer" containerID="681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.877234 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} err="failed to get container status \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": rpc error: code = NotFound desc = could not find container \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": container with ID starting with 681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.877254 4995 scope.go:117] "RemoveContainer" containerID="0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.877696 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} err="failed to get container status \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": rpc error: code = NotFound desc = could not find container \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": container with ID starting with 0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.877720 4995 scope.go:117] "RemoveContainer" containerID="e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878031 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} err="failed to get container status \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": rpc error: code = NotFound desc = could not find container \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": container with ID starting with e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878050 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878312 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} err="failed to get container status \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": rpc error: code = NotFound desc = could not find container \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": container with ID starting with 09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878331 4995 scope.go:117] "RemoveContainer" containerID="f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878529 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} err="failed to get container status \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": rpc error: code = NotFound desc = could not find container \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": container with ID starting with f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878547 4995 scope.go:117] "RemoveContainer" containerID="01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878779 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} err="failed to get container status \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": rpc error: code = NotFound desc = could not find container \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": container with ID starting with 01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7 not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878818 4995 scope.go:117] "RemoveContainer" containerID="eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.884078 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} err="failed to get container status \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": rpc error: code = NotFound desc = could not find container \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": container with ID starting with eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845 not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.884130 4995 scope.go:117] "RemoveContainer" containerID="4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.884440 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} err="failed to get container status \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": rpc error: code = NotFound desc = could not find container \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": container with ID starting with 4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.884477 4995 scope.go:117] "RemoveContainer" containerID="424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.885257 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} err="failed to get container status \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": rpc error: code = NotFound desc = could not find container \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": container with ID starting with 424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6 not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.885300 4995 scope.go:117] "RemoveContainer" containerID="756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.885554 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} err="failed to get container status \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": rpc error: code = NotFound desc = could not find container \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": container with ID starting with 756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.885588 4995 scope.go:117] "RemoveContainer" containerID="681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.885928 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} err="failed to get container status \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": rpc error: code = NotFound desc = could not find container \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": container with ID starting with 681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.885947 4995 scope.go:117] "RemoveContainer" containerID="0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.886321 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} err="failed to get container status \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": rpc error: code = NotFound desc = could not find container \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": container with ID starting with 0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.886353 4995 scope.go:117] "RemoveContainer" containerID="e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.886630 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} err="failed to get container status \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": rpc error: code = NotFound desc = could not find container \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": container with ID starting with e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.886654 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887042 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} err="failed to get container status \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": rpc error: code = NotFound desc = could not find container \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": container with ID starting with 09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887063 4995 scope.go:117] "RemoveContainer" containerID="f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887378 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} err="failed to get container status \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": rpc error: code = NotFound desc = could not find container \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": container with ID starting with f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887397 4995 scope.go:117] "RemoveContainer" containerID="01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887645 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} err="failed to get container status \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": rpc error: code = NotFound desc = could not find container \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": container with ID starting with 01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7 not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887678 4995 scope.go:117] "RemoveContainer" containerID="eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887934 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} err="failed to get container status \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": rpc error: code = NotFound desc = could not find container \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": container with ID starting with eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845 not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887951 4995 scope.go:117] "RemoveContainer" containerID="4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.888137 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} err="failed to get container status \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": rpc error: code = NotFound desc = could not find container \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": container with ID starting with 4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.888151 4995 scope.go:117] "RemoveContainer" containerID="424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.888330 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} err="failed to get container status \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": rpc error: code = NotFound desc = could not find container \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": container with ID starting with 424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6 not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.888347 4995 scope.go:117] "RemoveContainer" containerID="756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.888621 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} err="failed to get container status \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": rpc error: code = NotFound desc = could not find container \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": container with ID starting with 756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.888654 4995 scope.go:117] "RemoveContainer" containerID="681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.891761 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} err="failed to get container status \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": rpc error: code = NotFound desc = could not find container \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": container with ID starting with 681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e not found: ID does not exist"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.891797 4995 scope.go:117] "RemoveContainer" containerID="0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"
Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.892403 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} err="failed to get container status \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": rpc error: code = NotFound desc = could not find container \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": container with ID starting with 0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab not found: ID does not exist"
Jan 26 23:19:03 crc kubenswrapper[4995]: I0126 23:19:03.469217 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"07a55a4f1d8fa7240764895ffcdbc647325fe2e614a028960c1e25473169e72e"}
Jan 26 23:19:03 crc kubenswrapper[4995]: I0126 23:19:03.469254 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"19d6f00caedca58ea3b184fa38255377d1758417479bbc5b588d479bb2ed5d42"}
Jan 26 23:19:03 crc kubenswrapper[4995]: I0126 23:19:03.469266 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"a4c8ba38af5ae0c8e752248a3ef313c3f4a7c9f30f1d2d3fba8d51f9b23de9c4"}
Jan 26 23:19:03 crc kubenswrapper[4995]: I0126 23:19:03.469278 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"34eefd48899ef598accbf5d9b83684486030f06f9132dc791c6b44c99f28bd1c"}
Jan 26 23:19:03 crc kubenswrapper[4995]: I0126 23:19:03.469287 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"b71cde1cae14bf2eb6974116e3b69ab08ea76eede35f0107d983ae68d3d375a2"}
Jan 26 23:19:03 crc kubenswrapper[4995]: I0126 23:19:03.469296 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"68666f537246ef7da037a59ed97650ccf9118ad001a96c6222c1f7bcaabd7221"}
Jan 26 23:19:03 crc kubenswrapper[4995]: I0126 23:19:03.472166 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hln88_4ba70657-ea12-4a85-9ec3-c1423b5b6912/kube-multus/1.log"
Jan 26 23:19:04 crc kubenswrapper[4995]: I0126 23:19:04.523185 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" path="/var/lib/kubelet/pods/be4486f1-6ac2-4655-aff8-634049c9aa6c/volumes"
Jan 26 23:19:06 crc kubenswrapper[4995]: I0126 23:19:06.491127 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"e0392f1d2242398feacbe8979fcf410689a06392e803ff3247d88000f0700e9a"}
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.722154 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4"]
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.723078 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.724948 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.725225 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4fv56"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.726987 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.769350 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8sqb\" (UniqueName: \"kubernetes.io/projected/a1c71758-f818-4fd6-a985-4aa33488e96c-kube-api-access-s8sqb\") pod \"obo-prometheus-operator-68bc856cb9-zfmp4\" (UID: \"a1c71758-f818-4fd6-a985-4aa33488e96c\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.846801 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r"]
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.847563 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.850342 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.850507 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-nbphx"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.867047 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g"]
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.867821 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.870200 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/684ae2c3-240e-4b73-9aaa-391ad824f47d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r\" (UID: \"684ae2c3-240e-4b73-9aaa-391ad824f47d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.870295 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8sqb\" (UniqueName: \"kubernetes.io/projected/a1c71758-f818-4fd6-a985-4aa33488e96c-kube-api-access-s8sqb\") pod \"obo-prometheus-operator-68bc856cb9-zfmp4\" (UID: \"a1c71758-f818-4fd6-a985-4aa33488e96c\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.870362 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/684ae2c3-240e-4b73-9aaa-391ad824f47d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r\" (UID: \"684ae2c3-240e-4b73-9aaa-391ad824f47d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.902581 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8sqb\" (UniqueName: \"kubernetes.io/projected/a1c71758-f818-4fd6-a985-4aa33488e96c-kube-api-access-s8sqb\") pod \"obo-prometheus-operator-68bc856cb9-zfmp4\" (UID: \"a1c71758-f818-4fd6-a985-4aa33488e96c\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.972189 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/684ae2c3-240e-4b73-9aaa-391ad824f47d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r\" (UID: \"684ae2c3-240e-4b73-9aaa-391ad824f47d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.972252 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f936e96-9a6c-4e10-97a1-ccbf7e8c14de-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g\" (UID: \"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.972316 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/684ae2c3-240e-4b73-9aaa-391ad824f47d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r\" (UID: \"684ae2c3-240e-4b73-9aaa-391ad824f47d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.972343 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f936e96-9a6c-4e10-97a1-ccbf7e8c14de-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g\" (UID: \"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.976633 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/684ae2c3-240e-4b73-9aaa-391ad824f47d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r\" (UID: \"684ae2c3-240e-4b73-9aaa-391ad824f47d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r"
Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.984644 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/684ae2c3-240e-4b73-9aaa-391ad824f47d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r\" (UID: \"684ae2c3-240e-4b73-9aaa-391ad824f47d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r"
Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.035899 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4"
Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.036025 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-g4lwc"]
Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.036876 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc"
Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.039279 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-6x8tq"
Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.039495 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.080737 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f936e96-9a6c-4e10-97a1-ccbf7e8c14de-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g\" (UID: \"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g"
Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.080820 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/549a554b-0ef6-4d8b-b2cf-4445474572d2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-g4lwc\" (UID: \"549a554b-0ef6-4d8b-b2cf-4445474572d2\") " pod="openshift-operators/observability-operator-59bdc8b94-g4lwc"
Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.080868 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName:
\"kubernetes.io/secret/4f936e96-9a6c-4e10-97a1-ccbf7e8c14de-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g\" (UID: \"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.080931 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndvw9\" (UniqueName: \"kubernetes.io/projected/549a554b-0ef6-4d8b-b2cf-4445474572d2-kube-api-access-ndvw9\") pod \"observability-operator-59bdc8b94-g4lwc\" (UID: \"549a554b-0ef6-4d8b-b2cf-4445474572d2\") " pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.084137 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f936e96-9a6c-4e10-97a1-ccbf7e8c14de-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g\" (UID: \"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.084154 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f936e96-9a6c-4e10-97a1-ccbf7e8c14de-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g\" (UID: \"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.087076 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(d261bfad987a3e1eb80e548db1bfbd0f12d4bde3177781b2331f91c3940dad34): no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.087138 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(d261bfad987a3e1eb80e548db1bfbd0f12d4bde3177781b2331f91c3940dad34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.087157 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(d261bfad987a3e1eb80e548db1bfbd0f12d4bde3177781b2331f91c3940dad34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.087193 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators(a1c71758-f818-4fd6-a985-4aa33488e96c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators(a1c71758-f818-4fd6-a985-4aa33488e96c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(d261bfad987a3e1eb80e548db1bfbd0f12d4bde3177781b2331f91c3940dad34): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" podUID="a1c71758-f818-4fd6-a985-4aa33488e96c" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.164353 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.182088 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/549a554b-0ef6-4d8b-b2cf-4445474572d2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-g4lwc\" (UID: \"549a554b-0ef6-4d8b-b2cf-4445474572d2\") " pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.182396 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndvw9\" (UniqueName: \"kubernetes.io/projected/549a554b-0ef6-4d8b-b2cf-4445474572d2-kube-api-access-ndvw9\") pod \"observability-operator-59bdc8b94-g4lwc\" (UID: \"549a554b-0ef6-4d8b-b2cf-4445474572d2\") " pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.182824 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.186061 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/549a554b-0ef6-4d8b-b2cf-4445474572d2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-g4lwc\" (UID: \"549a554b-0ef6-4d8b-b2cf-4445474572d2\") " pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.186253 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(0bb9887f4b2c2bded796b9c87bd12eaf08d320e2a172dfe3039610bef55a57a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.186310 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(0bb9887f4b2c2bded796b9c87bd12eaf08d320e2a172dfe3039610bef55a57a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.186330 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(0bb9887f4b2c2bded796b9c87bd12eaf08d320e2a172dfe3039610bef55a57a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.186369 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators(684ae2c3-240e-4b73-9aaa-391ad824f47d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators(684ae2c3-240e-4b73-9aaa-391ad824f47d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(0bb9887f4b2c2bded796b9c87bd12eaf08d320e2a172dfe3039610bef55a57a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" podUID="684ae2c3-240e-4b73-9aaa-391ad824f47d" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.199518 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndvw9\" (UniqueName: \"kubernetes.io/projected/549a554b-0ef6-4d8b-b2cf-4445474572d2-kube-api-access-ndvw9\") pod \"observability-operator-59bdc8b94-g4lwc\" (UID: \"549a554b-0ef6-4d8b-b2cf-4445474572d2\") " pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.201310 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(24cfa2e077858f4e8240b0e85e85f71a6af791215f459c90f010ff271fa43c2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.201370 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(24cfa2e077858f4e8240b0e85e85f71a6af791215f459c90f010ff271fa43c2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.201391 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(24cfa2e077858f4e8240b0e85e85f71a6af791215f459c90f010ff271fa43c2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.201439 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators(4f936e96-9a6c-4e10-97a1-ccbf7e8c14de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators(4f936e96-9a6c-4e10-97a1-ccbf7e8c14de)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(24cfa2e077858f4e8240b0e85e85f71a6af791215f459c90f010ff271fa43c2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" podUID="4f936e96-9a6c-4e10-97a1-ccbf7e8c14de" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.243820 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ngw26"] Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.244478 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.248511 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-nbwrf" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.283656 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lxnv\" (UniqueName: \"kubernetes.io/projected/f8710ec9-2fc5-400b-83d0-0411f6e7fdc8-kube-api-access-5lxnv\") pod \"perses-operator-5bf474d74f-ngw26\" (UID: \"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8\") " pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.283752 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8710ec9-2fc5-400b-83d0-0411f6e7fdc8-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ngw26\" (UID: \"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8\") " pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.384919 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8710ec9-2fc5-400b-83d0-0411f6e7fdc8-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ngw26\" (UID: \"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8\") " pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.384974 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lxnv\" (UniqueName: \"kubernetes.io/projected/f8710ec9-2fc5-400b-83d0-0411f6e7fdc8-kube-api-access-5lxnv\") pod \"perses-operator-5bf474d74f-ngw26\" (UID: \"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8\") " pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 
23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.385993 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8710ec9-2fc5-400b-83d0-0411f6e7fdc8-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ngw26\" (UID: \"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8\") " pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.395689 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.402987 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lxnv\" (UniqueName: \"kubernetes.io/projected/f8710ec9-2fc5-400b-83d0-0411f6e7fdc8-kube-api-access-5lxnv\") pod \"perses-operator-5bf474d74f-ngw26\" (UID: \"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8\") " pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.417639 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(265ee90a4a9e5c3722bf282496b7d076023c020c09df2e088208512074a58615): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.417824 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(265ee90a4a9e5c3722bf282496b7d076023c020c09df2e088208512074a58615): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.417922 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(265ee90a4a9e5c3722bf282496b7d076023c020c09df2e088208512074a58615): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.418043 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-g4lwc_openshift-operators(549a554b-0ef6-4d8b-b2cf-4445474572d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-g4lwc_openshift-operators(549a554b-0ef6-4d8b-b2cf-4445474572d2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(265ee90a4a9e5c3722bf282496b7d076023c020c09df2e088208512074a58615): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" podUID="549a554b-0ef6-4d8b-b2cf-4445474572d2" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.505066 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"9846f84591fb6d95e62aa5a937a988c9ee991c667a387c6ec778d409a3a0043f"} Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.505487 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.505537 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.539931 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" podStartSLOduration=7.5399179830000005 podStartE2EDuration="7.539917983s" podCreationTimestamp="2026-01-26 23:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:19:08.539766979 +0000 UTC m=+652.704474444" watchObservedRunningTime="2026-01-26 23:19:08.539917983 +0000 UTC m=+652.704625448" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.545776 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.563434 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.583064 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(c97dd20ddb18a66158ad553864ce1acc029c08cd3c0b1c934345b184f9568d25): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.583138 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(c97dd20ddb18a66158ad553864ce1acc029c08cd3c0b1c934345b184f9568d25): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.583160 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(c97dd20ddb18a66158ad553864ce1acc029c08cd3c0b1c934345b184f9568d25): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.583197 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-ngw26_openshift-operators(f8710ec9-2fc5-400b-83d0-0411f6e7fdc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-ngw26_openshift-operators(f8710ec9-2fc5-400b-83d0-0411f6e7fdc8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(c97dd20ddb18a66158ad553864ce1acc029c08cd3c0b1c934345b184f9568d25): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" podUID="f8710ec9-2fc5-400b-83d0-0411f6e7fdc8" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.692444 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g"] Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.692544 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.692847 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.696778 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4"] Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.696897 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.697315 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.701260 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ngw26"] Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.709086 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r"] Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.710548 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.710913 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.716059 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-g4lwc"] Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.716213 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.716730 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.752055 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(fec867562b361b5b9536e553c675145f44d9b417c4ffc37004b90898e1cc1b6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.752132 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(fec867562b361b5b9536e553c675145f44d9b417c4ffc37004b90898e1cc1b6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.752154 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(fec867562b361b5b9536e553c675145f44d9b417c4ffc37004b90898e1cc1b6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.752197 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators(4f936e96-9a6c-4e10-97a1-ccbf7e8c14de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators(4f936e96-9a6c-4e10-97a1-ccbf7e8c14de)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(fec867562b361b5b9536e553c675145f44d9b417c4ffc37004b90898e1cc1b6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" podUID="4f936e96-9a6c-4e10-97a1-ccbf7e8c14de" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.760265 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(335d6e08a7e28056033339d3997d4b5a532b90fb22657bb4e94d5dcf9e9ca250): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.760325 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(335d6e08a7e28056033339d3997d4b5a532b90fb22657bb4e94d5dcf9e9ca250): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.760347 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(335d6e08a7e28056033339d3997d4b5a532b90fb22657bb4e94d5dcf9e9ca250): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.760388 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators(a1c71758-f818-4fd6-a985-4aa33488e96c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators(a1c71758-f818-4fd6-a985-4aa33488e96c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(335d6e08a7e28056033339d3997d4b5a532b90fb22657bb4e94d5dcf9e9ca250): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" podUID="a1c71758-f818-4fd6-a985-4aa33488e96c" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.781328 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(69fc5a34ccedd2f409dfe351acbdca3d6d341d9849ac347e4ad3833a6d994bb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.781421 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(69fc5a34ccedd2f409dfe351acbdca3d6d341d9849ac347e4ad3833a6d994bb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.781449 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(69fc5a34ccedd2f409dfe351acbdca3d6d341d9849ac347e4ad3833a6d994bb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.781514 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators(684ae2c3-240e-4b73-9aaa-391ad824f47d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators(684ae2c3-240e-4b73-9aaa-391ad824f47d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(69fc5a34ccedd2f409dfe351acbdca3d6d341d9849ac347e4ad3833a6d994bb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" podUID="684ae2c3-240e-4b73-9aaa-391ad824f47d" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.793475 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(53767d6f43194f3193f17d7623de429b07a07da31553c174e372a04ed975b099): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.793517 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(53767d6f43194f3193f17d7623de429b07a07da31553c174e372a04ed975b099): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.793538 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(53767d6f43194f3193f17d7623de429b07a07da31553c174e372a04ed975b099): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.793576 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-g4lwc_openshift-operators(549a554b-0ef6-4d8b-b2cf-4445474572d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-g4lwc_openshift-operators(549a554b-0ef6-4d8b-b2cf-4445474572d2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(53767d6f43194f3193f17d7623de429b07a07da31553c174e372a04ed975b099): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" podUID="549a554b-0ef6-4d8b-b2cf-4445474572d2" Jan 26 23:19:09 crc kubenswrapper[4995]: I0126 23:19:09.510707 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:09 crc kubenswrapper[4995]: I0126 23:19:09.511273 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:09 crc kubenswrapper[4995]: I0126 23:19:09.511794 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:09 crc kubenswrapper[4995]: E0126 23:19:09.535418 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(e211785d999f5b0877ca4a0425cec50d15973584631d5cf6ff6ebc0a19011499): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 23:19:09 crc kubenswrapper[4995]: E0126 23:19:09.535503 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(e211785d999f5b0877ca4a0425cec50d15973584631d5cf6ff6ebc0a19011499): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:09 crc kubenswrapper[4995]: E0126 23:19:09.535530 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(e211785d999f5b0877ca4a0425cec50d15973584631d5cf6ff6ebc0a19011499): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:09 crc kubenswrapper[4995]: E0126 23:19:09.535610 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-ngw26_openshift-operators(f8710ec9-2fc5-400b-83d0-0411f6e7fdc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-ngw26_openshift-operators(f8710ec9-2fc5-400b-83d0-0411f6e7fdc8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(e211785d999f5b0877ca4a0425cec50d15973584631d5cf6ff6ebc0a19011499): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" podUID="f8710ec9-2fc5-400b-83d0-0411f6e7fdc8" Jan 26 23:19:09 crc kubenswrapper[4995]: I0126 23:19:09.556788 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:13 crc kubenswrapper[4995]: I0126 23:19:13.517116 4995 scope.go:117] "RemoveContainer" containerID="c1c729b92e56f57861fb9e9cb3255d4e859441764e1404ed6d2ec73d8bf2cc23" Jan 26 23:19:14 crc kubenswrapper[4995]: I0126 23:19:14.562581 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hln88_4ba70657-ea12-4a85-9ec3-c1423b5b6912/kube-multus/1.log" Jan 26 23:19:14 crc kubenswrapper[4995]: I0126 23:19:14.562995 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hln88" event={"ID":"4ba70657-ea12-4a85-9ec3-c1423b5b6912","Type":"ContainerStarted","Data":"fa90d0287da0bbe24975f7263e98dcd40797e5854c53a5fc11a45864231005f7"} Jan 26 23:19:16 crc kubenswrapper[4995]: I0126 23:19:16.776222 4995 scope.go:117] "RemoveContainer" containerID="866b4e150df34bb856c7909125a903ef3e4e3722c867e9f3bd61353008835213" Jan 26 23:19:19 crc kubenswrapper[4995]: I0126 23:19:19.516756 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:19 crc kubenswrapper[4995]: I0126 23:19:19.517975 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:19 crc kubenswrapper[4995]: I0126 23:19:19.961716 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g"] Jan 26 23:19:20 crc kubenswrapper[4995]: I0126 23:19:20.610644 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" event={"ID":"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de","Type":"ContainerStarted","Data":"b316deca6816f1dfc397ed716c14619d5400dedf4a6f09333ebdc9a1bcdaeb57"} Jan 26 23:19:21 crc kubenswrapper[4995]: I0126 23:19:21.516548 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:21 crc kubenswrapper[4995]: I0126 23:19:21.517251 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:22 crc kubenswrapper[4995]: I0126 23:19:22.010770 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4"] Jan 26 23:19:22 crc kubenswrapper[4995]: W0126 23:19:22.020410 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1c71758_f818_4fd6_a985_4aa33488e96c.slice/crio-45e5128677a7a365bfd7938b933476308ccb29588150d16fb8d7d45add106973 WatchSource:0}: Error finding container 45e5128677a7a365bfd7938b933476308ccb29588150d16fb8d7d45add106973: Status 404 returned error can't find the container with id 45e5128677a7a365bfd7938b933476308ccb29588150d16fb8d7d45add106973 Jan 26 23:19:22 crc kubenswrapper[4995]: I0126 23:19:22.624980 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" 
event={"ID":"a1c71758-f818-4fd6-a985-4aa33488e96c","Type":"ContainerStarted","Data":"45e5128677a7a365bfd7938b933476308ccb29588150d16fb8d7d45add106973"} Jan 26 23:19:23 crc kubenswrapper[4995]: I0126 23:19:23.517084 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:23 crc kubenswrapper[4995]: I0126 23:19:23.517466 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:23 crc kubenswrapper[4995]: I0126 23:19:23.517889 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:23 crc kubenswrapper[4995]: I0126 23:19:23.518595 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:24 crc kubenswrapper[4995]: I0126 23:19:24.516655 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:24 crc kubenswrapper[4995]: I0126 23:19:24.517636 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:25 crc kubenswrapper[4995]: I0126 23:19:25.071060 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ngw26"] Jan 26 23:19:25 crc kubenswrapper[4995]: W0126 23:19:25.080436 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8710ec9_2fc5_400b_83d0_0411f6e7fdc8.slice/crio-3cfd826e47cd24c0fce1f3861079b2de8b895f301a330ae7e3a837e6bc1ebee4 WatchSource:0}: Error finding container 3cfd826e47cd24c0fce1f3861079b2de8b895f301a330ae7e3a837e6bc1ebee4: Status 404 returned error can't find the container with id 3cfd826e47cd24c0fce1f3861079b2de8b895f301a330ae7e3a837e6bc1ebee4 Jan 26 23:19:25 crc kubenswrapper[4995]: I0126 23:19:25.118035 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-g4lwc"] Jan 26 23:19:25 crc kubenswrapper[4995]: I0126 23:19:25.289846 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r"] Jan 26 23:19:25 crc kubenswrapper[4995]: I0126 23:19:25.651731 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" event={"ID":"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de","Type":"ContainerStarted","Data":"a4bfe53113fddbce132dfdaf6928778be5968cb4d1607a135032d2f5a01ca7e8"} Jan 26 23:19:25 crc kubenswrapper[4995]: I0126 23:19:25.653019 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" event={"ID":"549a554b-0ef6-4d8b-b2cf-4445474572d2","Type":"ContainerStarted","Data":"7e91df246ea222c267f0287e4f6f25205ba3a9238306a4a5d06f220ae1b5d453"} Jan 26 23:19:25 crc kubenswrapper[4995]: I0126 23:19:25.654196 4995 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" event={"ID":"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8","Type":"ContainerStarted","Data":"3cfd826e47cd24c0fce1f3861079b2de8b895f301a330ae7e3a837e6bc1ebee4"} Jan 26 23:19:25 crc kubenswrapper[4995]: I0126 23:19:25.684401 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" podStartSLOduration=13.802845018 podStartE2EDuration="18.684376391s" podCreationTimestamp="2026-01-26 23:19:07 +0000 UTC" firstStartedPulling="2026-01-26 23:19:19.975864944 +0000 UTC m=+664.140572449" lastFinishedPulling="2026-01-26 23:19:24.857396357 +0000 UTC m=+669.022103822" observedRunningTime="2026-01-26 23:19:25.678072093 +0000 UTC m=+669.842779558" watchObservedRunningTime="2026-01-26 23:19:25.684376391 +0000 UTC m=+669.849083886" Jan 26 23:19:25 crc kubenswrapper[4995]: W0126 23:19:25.949709 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod684ae2c3_240e_4b73_9aaa_391ad824f47d.slice/crio-15b5ba3eb2648d44973d9a417a9675ac2f7b66068828c591bf763fd81d3b6b0d WatchSource:0}: Error finding container 15b5ba3eb2648d44973d9a417a9675ac2f7b66068828c591bf763fd81d3b6b0d: Status 404 returned error can't find the container with id 15b5ba3eb2648d44973d9a417a9675ac2f7b66068828c591bf763fd81d3b6b0d Jan 26 23:19:26 crc kubenswrapper[4995]: I0126 23:19:26.673434 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" event={"ID":"684ae2c3-240e-4b73-9aaa-391ad824f47d","Type":"ContainerStarted","Data":"2418ea308002b3729fdd946d7908ee6ca2946e91bf0feb8e68726653317797e0"} Jan 26 23:19:26 crc kubenswrapper[4995]: I0126 23:19:26.673477 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" 
event={"ID":"684ae2c3-240e-4b73-9aaa-391ad824f47d","Type":"ContainerStarted","Data":"15b5ba3eb2648d44973d9a417a9675ac2f7b66068828c591bf763fd81d3b6b0d"} Jan 26 23:19:26 crc kubenswrapper[4995]: I0126 23:19:26.684578 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" event={"ID":"a1c71758-f818-4fd6-a985-4aa33488e96c","Type":"ContainerStarted","Data":"3daa3378178ea242c56404676343a5072503c94bf19589f95f11b05a99fa0d46"} Jan 26 23:19:26 crc kubenswrapper[4995]: I0126 23:19:26.699946 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" podStartSLOduration=19.699923752 podStartE2EDuration="19.699923752s" podCreationTimestamp="2026-01-26 23:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:19:26.694702941 +0000 UTC m=+670.859410406" watchObservedRunningTime="2026-01-26 23:19:26.699923752 +0000 UTC m=+670.864631227" Jan 26 23:19:26 crc kubenswrapper[4995]: I0126 23:19:26.718827 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" podStartSLOduration=15.712679721 podStartE2EDuration="19.718809406s" podCreationTimestamp="2026-01-26 23:19:07 +0000 UTC" firstStartedPulling="2026-01-26 23:19:22.031406458 +0000 UTC m=+666.196113923" lastFinishedPulling="2026-01-26 23:19:26.037536143 +0000 UTC m=+670.202243608" observedRunningTime="2026-01-26 23:19:26.71137779 +0000 UTC m=+670.876085255" watchObservedRunningTime="2026-01-26 23:19:26.718809406 +0000 UTC m=+670.883516871" Jan 26 23:19:28 crc kubenswrapper[4995]: I0126 23:19:28.697145 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" 
event={"ID":"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8","Type":"ContainerStarted","Data":"23946c88c7a1327f3b646682d6d7d8c31d7b6312e27929cb8855d516895b04b2"} Jan 26 23:19:28 crc kubenswrapper[4995]: I0126 23:19:28.697529 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:28 crc kubenswrapper[4995]: I0126 23:19:28.724912 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" podStartSLOduration=18.209376369 podStartE2EDuration="20.724897269s" podCreationTimestamp="2026-01-26 23:19:08 +0000 UTC" firstStartedPulling="2026-01-26 23:19:25.0824238 +0000 UTC m=+669.247131255" lastFinishedPulling="2026-01-26 23:19:27.59794469 +0000 UTC m=+671.762652155" observedRunningTime="2026-01-26 23:19:28.720397226 +0000 UTC m=+672.885104691" watchObservedRunningTime="2026-01-26 23:19:28.724897269 +0000 UTC m=+672.889604734" Jan 26 23:19:31 crc kubenswrapper[4995]: I0126 23:19:31.724220 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" event={"ID":"549a554b-0ef6-4d8b-b2cf-4445474572d2","Type":"ContainerStarted","Data":"52ed7660fd32105bec83a0d06ff30ba4ff37bd33829aac8e732bc60ca2c4685e"} Jan 26 23:19:31 crc kubenswrapper[4995]: I0126 23:19:31.726365 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:31 crc kubenswrapper[4995]: I0126 23:19:31.773031 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" podStartSLOduration=17.827378072 podStartE2EDuration="23.773011057s" podCreationTimestamp="2026-01-26 23:19:08 +0000 UTC" firstStartedPulling="2026-01-26 23:19:25.146867309 +0000 UTC m=+669.311574774" lastFinishedPulling="2026-01-26 23:19:31.092500284 +0000 UTC m=+675.257207759" 
observedRunningTime="2026-01-26 23:19:31.770878734 +0000 UTC m=+675.935586209" watchObservedRunningTime="2026-01-26 23:19:31.773011057 +0000 UTC m=+675.937718522" Jan 26 23:19:31 crc kubenswrapper[4995]: I0126 23:19:31.787433 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:32 crc kubenswrapper[4995]: I0126 23:19:32.189198 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:38 crc kubenswrapper[4995]: I0126 23:19:38.567326 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.363403 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6"] Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.364757 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.366810 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.375062 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6"] Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.425746 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.426022 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.426152 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl58d\" (UniqueName: \"kubernetes.io/projected/c72f27ba-28b4-41be-a2e3-894496ce06fb-kube-api-access-tl58d\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: 
I0126 23:19:39.527659 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl58d\" (UniqueName: \"kubernetes.io/projected/c72f27ba-28b4-41be-a2e3-894496ce06fb-kube-api-access-tl58d\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.528144 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.528487 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.528746 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.529170 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.551361 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl58d\" (UniqueName: \"kubernetes.io/projected/c72f27ba-28b4-41be-a2e3-894496ce06fb-kube-api-access-tl58d\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.682787 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:40 crc kubenswrapper[4995]: I0126 23:19:40.155461 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6"] Jan 26 23:19:40 crc kubenswrapper[4995]: I0126 23:19:40.780250 4995 generic.go:334] "Generic (PLEG): container finished" podID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerID="3de19504031a85b67e608f56cd36e235a222a4ad8949a92aee0a3bfe9a3e9411" exitCode=0 Jan 26 23:19:40 crc kubenswrapper[4995]: I0126 23:19:40.780412 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" event={"ID":"c72f27ba-28b4-41be-a2e3-894496ce06fb","Type":"ContainerDied","Data":"3de19504031a85b67e608f56cd36e235a222a4ad8949a92aee0a3bfe9a3e9411"} Jan 26 23:19:40 crc kubenswrapper[4995]: I0126 23:19:40.780583 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" event={"ID":"c72f27ba-28b4-41be-a2e3-894496ce06fb","Type":"ContainerStarted","Data":"5d8d8f6eaf76cdd240751561dafd4d23164ab64ffb884727a96f9b38147f8f99"} Jan 26 23:19:42 crc kubenswrapper[4995]: I0126 23:19:42.794938 4995 generic.go:334] "Generic (PLEG): container finished" podID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerID="f660fb4ddbb9aa17c2ee2fa2bf46bff0879a10ca4eaf436e01da1c898740a1f9" exitCode=0 Jan 26 23:19:42 crc kubenswrapper[4995]: I0126 23:19:42.795125 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" event={"ID":"c72f27ba-28b4-41be-a2e3-894496ce06fb","Type":"ContainerDied","Data":"f660fb4ddbb9aa17c2ee2fa2bf46bff0879a10ca4eaf436e01da1c898740a1f9"} Jan 26 23:19:43 crc kubenswrapper[4995]: I0126 23:19:43.803045 4995 generic.go:334] "Generic (PLEG): container finished" podID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerID="877ea637719eb54168db9f6e40b163aaba2f1e7f8b850b115b754f53459be5ae" exitCode=0 Jan 26 23:19:43 crc kubenswrapper[4995]: I0126 23:19:43.803144 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" event={"ID":"c72f27ba-28b4-41be-a2e3-894496ce06fb","Type":"ContainerDied","Data":"877ea637719eb54168db9f6e40b163aaba2f1e7f8b850b115b754f53459be5ae"} Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.092719 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.197176 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl58d\" (UniqueName: \"kubernetes.io/projected/c72f27ba-28b4-41be-a2e3-894496ce06fb-kube-api-access-tl58d\") pod \"c72f27ba-28b4-41be-a2e3-894496ce06fb\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.197428 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-util\") pod \"c72f27ba-28b4-41be-a2e3-894496ce06fb\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.197467 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-bundle\") pod \"c72f27ba-28b4-41be-a2e3-894496ce06fb\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.197954 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-bundle" (OuterVolumeSpecName: "bundle") pod "c72f27ba-28b4-41be-a2e3-894496ce06fb" (UID: "c72f27ba-28b4-41be-a2e3-894496ce06fb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.209322 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c72f27ba-28b4-41be-a2e3-894496ce06fb-kube-api-access-tl58d" (OuterVolumeSpecName: "kube-api-access-tl58d") pod "c72f27ba-28b4-41be-a2e3-894496ce06fb" (UID: "c72f27ba-28b4-41be-a2e3-894496ce06fb"). InnerVolumeSpecName "kube-api-access-tl58d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.216867 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-util" (OuterVolumeSpecName: "util") pod "c72f27ba-28b4-41be-a2e3-894496ce06fb" (UID: "c72f27ba-28b4-41be-a2e3-894496ce06fb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.298661 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl58d\" (UniqueName: \"kubernetes.io/projected/c72f27ba-28b4-41be-a2e3-894496ce06fb-kube-api-access-tl58d\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.298716 4995 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-util\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.298736 4995 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.831898 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" event={"ID":"c72f27ba-28b4-41be-a2e3-894496ce06fb","Type":"ContainerDied","Data":"5d8d8f6eaf76cdd240751561dafd4d23164ab64ffb884727a96f9b38147f8f99"} Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.831940 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d8d8f6eaf76cdd240751561dafd4d23164ab64ffb884727a96f9b38147f8f99" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.832002 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.028709 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fnp66"] Jan 26 23:19:51 crc kubenswrapper[4995]: E0126 23:19:51.030039 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerName="util" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.030080 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerName="util" Jan 26 23:19:51 crc kubenswrapper[4995]: E0126 23:19:51.030135 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerName="pull" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.030145 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerName="pull" Jan 26 23:19:51 crc kubenswrapper[4995]: E0126 23:19:51.030162 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerName="extract" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.030169 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerName="extract" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.030426 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerName="extract" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.031321 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.036255 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-mk8g8" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.036523 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.036592 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.050605 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fnp66"] Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.177825 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kmh9\" (UniqueName: \"kubernetes.io/projected/1f224cbd-cdf6-474c-bcc6-a37358dcd4f5-kube-api-access-2kmh9\") pod \"nmstate-operator-646758c888-fnp66\" (UID: \"1f224cbd-cdf6-474c-bcc6-a37358dcd4f5\") " pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.279537 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kmh9\" (UniqueName: \"kubernetes.io/projected/1f224cbd-cdf6-474c-bcc6-a37358dcd4f5-kube-api-access-2kmh9\") pod \"nmstate-operator-646758c888-fnp66\" (UID: \"1f224cbd-cdf6-474c-bcc6-a37358dcd4f5\") " pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.308460 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kmh9\" (UniqueName: \"kubernetes.io/projected/1f224cbd-cdf6-474c-bcc6-a37358dcd4f5-kube-api-access-2kmh9\") pod \"nmstate-operator-646758c888-fnp66\" (UID: 
\"1f224cbd-cdf6-474c-bcc6-a37358dcd4f5\") " pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.353166 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.611591 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fnp66"] Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.877572 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" event={"ID":"1f224cbd-cdf6-474c-bcc6-a37358dcd4f5","Type":"ContainerStarted","Data":"28787f53f246ec398a9fd01fe2a23032a814cfa5c0fe212950d47683f130f68c"} Jan 26 23:19:54 crc kubenswrapper[4995]: I0126 23:19:54.899225 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" event={"ID":"1f224cbd-cdf6-474c-bcc6-a37358dcd4f5","Type":"ContainerStarted","Data":"14ee2e36a2349337d284da497f7a139f7893006d0f993d02cf103a07010b2374"} Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.280126 4995 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.536068 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" podStartSLOduration=6.847795236 podStartE2EDuration="9.536047574s" podCreationTimestamp="2026-01-26 23:19:51 +0000 UTC" firstStartedPulling="2026-01-26 23:19:51.629311538 +0000 UTC m=+695.794019013" lastFinishedPulling="2026-01-26 23:19:54.317563846 +0000 UTC m=+698.482271351" observedRunningTime="2026-01-26 23:19:54.937879219 +0000 UTC m=+699.102586724" watchObservedRunningTime="2026-01-26 23:20:00.536047574 +0000 UTC m=+704.700755029" Jan 26 23:20:00 crc 
kubenswrapper[4995]: I0126 23:20:00.539783 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-75scl"] Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.546467 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f"] Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.547185 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.550302 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.553630 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-ctbps" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.553943 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.556704 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f"] Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.560535 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-75scl"] Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.614283 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shgqg\" (UniqueName: \"kubernetes.io/projected/4adb027e-2869-4cbc-bdb7-63ae41659c28-kube-api-access-shgqg\") pod \"nmstate-webhook-8474b5b9d8-jkj8f\" (UID: \"4adb027e-2869-4cbc-bdb7-63ae41659c28\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.614649 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7lgc\" (UniqueName: \"kubernetes.io/projected/49297381-c6bb-4ede-9f80-38ee237f7a3e-kube-api-access-p7lgc\") pod \"nmstate-metrics-54757c584b-75scl\" (UID: \"49297381-c6bb-4ede-9f80-38ee237f7a3e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.614700 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4adb027e-2869-4cbc-bdb7-63ae41659c28-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jkj8f\" (UID: \"4adb027e-2869-4cbc-bdb7-63ae41659c28\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.622298 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-4nqd8"] Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.622959 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.692874 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d"] Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.693529 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.695467 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.695698 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.695823 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-nfqk5" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.705126 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d"] Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.715708 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4adb027e-2869-4cbc-bdb7-63ae41659c28-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jkj8f\" (UID: \"4adb027e-2869-4cbc-bdb7-63ae41659c28\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.715922 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-ovs-socket\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.715990 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-nmstate-lock\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc 
kubenswrapper[4995]: I0126 23:20:00.716089 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv4zd\" (UniqueName: \"kubernetes.io/projected/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-kube-api-access-fv4zd\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.716217 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shgqg\" (UniqueName: \"kubernetes.io/projected/4adb027e-2869-4cbc-bdb7-63ae41659c28-kube-api-access-shgqg\") pod \"nmstate-webhook-8474b5b9d8-jkj8f\" (UID: \"4adb027e-2869-4cbc-bdb7-63ae41659c28\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.716294 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-dbus-socket\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.716363 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7lgc\" (UniqueName: \"kubernetes.io/projected/49297381-c6bb-4ede-9f80-38ee237f7a3e-kube-api-access-p7lgc\") pod \"nmstate-metrics-54757c584b-75scl\" (UID: \"49297381-c6bb-4ede-9f80-38ee237f7a3e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" Jan 26 23:20:00 crc kubenswrapper[4995]: E0126 23:20:00.715875 4995 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 26 23:20:00 crc kubenswrapper[4995]: E0126 23:20:00.716594 4995 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4adb027e-2869-4cbc-bdb7-63ae41659c28-tls-key-pair podName:4adb027e-2869-4cbc-bdb7-63ae41659c28 nodeName:}" failed. No retries permitted until 2026-01-26 23:20:01.216555219 +0000 UTC m=+705.381262684 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/4adb027e-2869-4cbc-bdb7-63ae41659c28-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-jkj8f" (UID: "4adb027e-2869-4cbc-bdb7-63ae41659c28") : secret "openshift-nmstate-webhook" not found Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.733564 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7lgc\" (UniqueName: \"kubernetes.io/projected/49297381-c6bb-4ede-9f80-38ee237f7a3e-kube-api-access-p7lgc\") pod \"nmstate-metrics-54757c584b-75scl\" (UID: \"49297381-c6bb-4ede-9f80-38ee237f7a3e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.733658 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shgqg\" (UniqueName: \"kubernetes.io/projected/4adb027e-2869-4cbc-bdb7-63ae41659c28-kube-api-access-shgqg\") pod \"nmstate-webhook-8474b5b9d8-jkj8f\" (UID: \"4adb027e-2869-4cbc-bdb7-63ae41659c28\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817298 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-nmstate-lock\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817480 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dfnw\" (UniqueName: 
\"kubernetes.io/projected/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-kube-api-access-6dfnw\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817578 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv4zd\" (UniqueName: \"kubernetes.io/projected/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-kube-api-access-fv4zd\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817408 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-nmstate-lock\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817658 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817805 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817863 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-dbus-socket\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817986 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-ovs-socket\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.818069 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-ovs-socket\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.818208 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-dbus-socket\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.834929 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv4zd\" (UniqueName: \"kubernetes.io/projected/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-kube-api-access-fv4zd\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.903712 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.918537 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dfnw\" (UniqueName: \"kubernetes.io/projected/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-kube-api-access-6dfnw\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.918581 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.918598 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.919440 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.922573 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.934979 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.942054 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dfnw\" (UniqueName: \"kubernetes.io/projected/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-kube-api-access-6dfnw\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.995596 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-567f8c8d56-2j2x6"] Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.002152 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.008873 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.024975 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-567f8c8d56-2j2x6"] Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.122299 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-service-ca\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.122351 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-trusted-ca-bundle\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.122379 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-console-config\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.122415 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdjsd\" (UniqueName: \"kubernetes.io/projected/05869402-35d4-4054-845a-e45b6e9ed633-kube-api-access-jdjsd\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.122460 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-oauth-serving-cert\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.122489 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-serving-cert\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.122562 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-oauth-config\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.135336 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-75scl"] Jan 26 23:20:01 crc kubenswrapper[4995]: W0126 23:20:01.140756 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49297381_c6bb_4ede_9f80_38ee237f7a3e.slice/crio-bb78b9b24e7faec5ff8c5a1a343c8a7befa855ee14becb479a73113aae658c34 WatchSource:0}: Error finding container bb78b9b24e7faec5ff8c5a1a343c8a7befa855ee14becb479a73113aae658c34: Status 404 returned error can't find the container with id bb78b9b24e7faec5ff8c5a1a343c8a7befa855ee14becb479a73113aae658c34 Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.223863 4995 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-oauth-config\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.223927 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-service-ca\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.223953 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-trusted-ca-bundle\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.223978 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-console-config\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.224016 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdjsd\" (UniqueName: \"kubernetes.io/projected/05869402-35d4-4054-845a-e45b6e9ed633-kube-api-access-jdjsd\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.224053 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-oauth-serving-cert\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.224079 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-serving-cert\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.224125 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4adb027e-2869-4cbc-bdb7-63ae41659c28-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jkj8f\" (UID: \"4adb027e-2869-4cbc-bdb7-63ae41659c28\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.224770 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d"] Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.226348 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-service-ca\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.226591 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-trusted-ca-bundle\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 
23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.227147 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-console-config\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.227188 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-oauth-serving-cert\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.229246 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-oauth-config\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.229428 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4adb027e-2869-4cbc-bdb7-63ae41659c28-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jkj8f\" (UID: \"4adb027e-2869-4cbc-bdb7-63ae41659c28\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:01 crc kubenswrapper[4995]: W0126 23:20:01.229848 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa9c3198_27d3_4733_8c9c_ccc6f0168f0d.slice/crio-d1c471300885d10b760f69050e2b1875cc1ea4446b9a5d8100d0b032dd0f7752 WatchSource:0}: Error finding container d1c471300885d10b760f69050e2b1875cc1ea4446b9a5d8100d0b032dd0f7752: Status 404 returned error can't find the 
container with id d1c471300885d10b760f69050e2b1875cc1ea4446b9a5d8100d0b032dd0f7752 Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.230324 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-serving-cert\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.242281 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdjsd\" (UniqueName: \"kubernetes.io/projected/05869402-35d4-4054-845a-e45b6e9ed633-kube-api-access-jdjsd\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.324781 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.514708 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.726355 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-567f8c8d56-2j2x6"] Jan 26 23:20:01 crc kubenswrapper[4995]: W0126 23:20:01.728302 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05869402_35d4_4054_845a_e45b6e9ed633.slice/crio-caaa99e8918dfe5e0d9cbad0907826dac119f7c0d5e453be225658d7ea0903b4 WatchSource:0}: Error finding container caaa99e8918dfe5e0d9cbad0907826dac119f7c0d5e453be225658d7ea0903b4: Status 404 returned error can't find the container with id caaa99e8918dfe5e0d9cbad0907826dac119f7c0d5e453be225658d7ea0903b4 Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.734332 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f"] Jan 26 23:20:01 crc kubenswrapper[4995]: W0126 23:20:01.746929 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4adb027e_2869_4cbc_bdb7_63ae41659c28.slice/crio-c381efa49c0ea08ee874eb08478cef338480cca39d8259bb4508d13243bedf4e WatchSource:0}: Error finding container c381efa49c0ea08ee874eb08478cef338480cca39d8259bb4508d13243bedf4e: Status 404 returned error can't find the container with id c381efa49c0ea08ee874eb08478cef338480cca39d8259bb4508d13243bedf4e Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.955217 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-567f8c8d56-2j2x6" event={"ID":"05869402-35d4-4054-845a-e45b6e9ed633","Type":"ContainerStarted","Data":"caaa99e8918dfe5e0d9cbad0907826dac119f7c0d5e453be225658d7ea0903b4"} Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.957988 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" event={"ID":"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d","Type":"ContainerStarted","Data":"d1c471300885d10b760f69050e2b1875cc1ea4446b9a5d8100d0b032dd0f7752"} Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.959479 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" event={"ID":"49297381-c6bb-4ede-9f80-38ee237f7a3e","Type":"ContainerStarted","Data":"bb78b9b24e7faec5ff8c5a1a343c8a7befa855ee14becb479a73113aae658c34"} Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.960788 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4nqd8" event={"ID":"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56","Type":"ContainerStarted","Data":"dfa7d601151afd6a7670af153f9e71fc238de824a9821b1aef6e095f9b2b0b0d"} Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.961764 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" event={"ID":"4adb027e-2869-4cbc-bdb7-63ae41659c28","Type":"ContainerStarted","Data":"c381efa49c0ea08ee874eb08478cef338480cca39d8259bb4508d13243bedf4e"} Jan 26 23:20:03 crc kubenswrapper[4995]: I0126 23:20:03.978013 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-567f8c8d56-2j2x6" event={"ID":"05869402-35d4-4054-845a-e45b6e9ed633","Type":"ContainerStarted","Data":"e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95"} Jan 26 23:20:03 crc kubenswrapper[4995]: I0126 23:20:03.999771 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-567f8c8d56-2j2x6" podStartSLOduration=3.999753772 podStartE2EDuration="3.999753772s" podCreationTimestamp="2026-01-26 23:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:20:03.997341592 +0000 UTC m=+708.162049067" 
watchObservedRunningTime="2026-01-26 23:20:03.999753772 +0000 UTC m=+708.164461237" Jan 26 23:20:04 crc kubenswrapper[4995]: I0126 23:20:04.985243 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4nqd8" event={"ID":"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56","Type":"ContainerStarted","Data":"aae766fb357642ec2264a826a097d355e45d839f0e5c577cc0ed08f009d637ee"} Jan 26 23:20:04 crc kubenswrapper[4995]: I0126 23:20:04.985605 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:04 crc kubenswrapper[4995]: I0126 23:20:04.987248 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" event={"ID":"4adb027e-2869-4cbc-bdb7-63ae41659c28","Type":"ContainerStarted","Data":"2cff811e4dfa4cc38dfa5cbeaa63f1f2cc63cc727c266e0d3042ff36631c4dee"} Jan 26 23:20:04 crc kubenswrapper[4995]: I0126 23:20:04.987321 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:04 crc kubenswrapper[4995]: I0126 23:20:04.988968 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" event={"ID":"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d","Type":"ContainerStarted","Data":"9e4ea379d5cd920f701d2b94ef9dedc5a1589b0828a33b78d3ec2901830164f2"} Jan 26 23:20:04 crc kubenswrapper[4995]: I0126 23:20:04.990251 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" event={"ID":"49297381-c6bb-4ede-9f80-38ee237f7a3e","Type":"ContainerStarted","Data":"f2d4fd9f8770438367fbe46ba91520e0b7936b237e3c17054982aa702abe2a3a"} Jan 26 23:20:05 crc kubenswrapper[4995]: I0126 23:20:05.002184 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-4nqd8" podStartSLOduration=1.386037766 
podStartE2EDuration="5.002167663s" podCreationTimestamp="2026-01-26 23:20:00 +0000 UTC" firstStartedPulling="2026-01-26 23:20:00.978390876 +0000 UTC m=+705.143098351" lastFinishedPulling="2026-01-26 23:20:04.594520773 +0000 UTC m=+708.759228248" observedRunningTime="2026-01-26 23:20:05.001475046 +0000 UTC m=+709.166182521" watchObservedRunningTime="2026-01-26 23:20:05.002167663 +0000 UTC m=+709.166875128" Jan 26 23:20:05 crc kubenswrapper[4995]: I0126 23:20:05.018314 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" podStartSLOduration=1.653426943 podStartE2EDuration="5.018268048s" podCreationTimestamp="2026-01-26 23:20:00 +0000 UTC" firstStartedPulling="2026-01-26 23:20:01.231901534 +0000 UTC m=+705.396608999" lastFinishedPulling="2026-01-26 23:20:04.596742599 +0000 UTC m=+708.761450104" observedRunningTime="2026-01-26 23:20:05.01717237 +0000 UTC m=+709.181879835" watchObservedRunningTime="2026-01-26 23:20:05.018268048 +0000 UTC m=+709.182975513" Jan 26 23:20:05 crc kubenswrapper[4995]: I0126 23:20:05.053861 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" podStartSLOduration=2.208569208 podStartE2EDuration="5.053837191s" podCreationTimestamp="2026-01-26 23:20:00 +0000 UTC" firstStartedPulling="2026-01-26 23:20:01.74925996 +0000 UTC m=+705.913967435" lastFinishedPulling="2026-01-26 23:20:04.594527913 +0000 UTC m=+708.759235418" observedRunningTime="2026-01-26 23:20:05.050375824 +0000 UTC m=+709.215083289" watchObservedRunningTime="2026-01-26 23:20:05.053837191 +0000 UTC m=+709.218544666" Jan 26 23:20:08 crc kubenswrapper[4995]: I0126 23:20:08.015215 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" 
event={"ID":"49297381-c6bb-4ede-9f80-38ee237f7a3e","Type":"ContainerStarted","Data":"2c9356a49cde6bb0a7c112a24a2bfc45c00bd81169ec1d79dcfd484f120bc591"} Jan 26 23:20:10 crc kubenswrapper[4995]: I0126 23:20:10.975868 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:11 crc kubenswrapper[4995]: I0126 23:20:11.004142 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" podStartSLOduration=4.674345938 podStartE2EDuration="11.004091241s" podCreationTimestamp="2026-01-26 23:20:00 +0000 UTC" firstStartedPulling="2026-01-26 23:20:01.142777355 +0000 UTC m=+705.307484820" lastFinishedPulling="2026-01-26 23:20:07.472522618 +0000 UTC m=+711.637230123" observedRunningTime="2026-01-26 23:20:08.0398748 +0000 UTC m=+712.204582305" watchObservedRunningTime="2026-01-26 23:20:11.004091241 +0000 UTC m=+715.168798746" Jan 26 23:20:11 crc kubenswrapper[4995]: I0126 23:20:11.325649 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:11 crc kubenswrapper[4995]: I0126 23:20:11.325753 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:11 crc kubenswrapper[4995]: I0126 23:20:11.334030 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:12 crc kubenswrapper[4995]: I0126 23:20:12.053735 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:12 crc kubenswrapper[4995]: I0126 23:20:12.126780 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zt9nn"] Jan 26 23:20:21 crc kubenswrapper[4995]: I0126 23:20:21.522215 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.740317 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m"] Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.742409 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.744251 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.751256 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m"] Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.844980 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.845268 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.845432 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-6wfk8\" (UniqueName: \"kubernetes.io/projected/a59475a0-c56e-4d7d-a062-2a9b7188a601-kube-api-access-6wfk8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.946371 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wfk8\" (UniqueName: \"kubernetes.io/projected/a59475a0-c56e-4d7d-a062-2a9b7188a601-kube-api-access-6wfk8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.946456 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.946481 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.946960 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-bundle\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.947077 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.966281 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wfk8\" (UniqueName: \"kubernetes.io/projected/a59475a0-c56e-4d7d-a062-2a9b7188a601-kube-api-access-6wfk8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:36 crc kubenswrapper[4995]: I0126 23:20:36.064968 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:36 crc kubenswrapper[4995]: I0126 23:20:36.354213 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m"] Jan 26 23:20:37 crc kubenswrapper[4995]: I0126 23:20:37.178960 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-zt9nn" podUID="e80b6b9d-3bfd-4315-8643-695c2101bddb" containerName="console" containerID="cri-o://4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042" gracePeriod=15 Jan 26 23:20:37 crc kubenswrapper[4995]: I0126 23:20:37.238843 4995 generic.go:334] "Generic (PLEG): container finished" podID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerID="4b1362bd825c081f5b994a1e689f02ccd29ba4f887dc007d7b96688f60cdfc9b" exitCode=0 Jan 26 23:20:37 crc kubenswrapper[4995]: I0126 23:20:37.238910 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" event={"ID":"a59475a0-c56e-4d7d-a062-2a9b7188a601","Type":"ContainerDied","Data":"4b1362bd825c081f5b994a1e689f02ccd29ba4f887dc007d7b96688f60cdfc9b"} Jan 26 23:20:37 crc kubenswrapper[4995]: I0126 23:20:37.238953 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" event={"ID":"a59475a0-c56e-4d7d-a062-2a9b7188a601","Type":"ContainerStarted","Data":"4d3e8b9e3d5aefbf68934c0abcdc1540b05aa2219fd460a9f65757f103f5b9f6"} Jan 26 23:20:37 crc kubenswrapper[4995]: I0126 23:20:37.997185 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zt9nn_e80b6b9d-3bfd-4315-8643-695c2101bddb/console/0.log" Jan 26 23:20:37 crc kubenswrapper[4995]: I0126 23:20:37.997517 4995 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.076508 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-oauth-config\") pod \"e80b6b9d-3bfd-4315-8643-695c2101bddb\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.076565 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-service-ca\") pod \"e80b6b9d-3bfd-4315-8643-695c2101bddb\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.076622 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-oauth-serving-cert\") pod \"e80b6b9d-3bfd-4315-8643-695c2101bddb\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.076719 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-serving-cert\") pod \"e80b6b9d-3bfd-4315-8643-695c2101bddb\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.076758 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-trusted-ca-bundle\") pod \"e80b6b9d-3bfd-4315-8643-695c2101bddb\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.076827 4995 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tt5qr\" (UniqueName: \"kubernetes.io/projected/e80b6b9d-3bfd-4315-8643-695c2101bddb-kube-api-access-tt5qr\") pod \"e80b6b9d-3bfd-4315-8643-695c2101bddb\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.076868 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-config\") pod \"e80b6b9d-3bfd-4315-8643-695c2101bddb\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.077491 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-service-ca" (OuterVolumeSpecName: "service-ca") pod "e80b6b9d-3bfd-4315-8643-695c2101bddb" (UID: "e80b6b9d-3bfd-4315-8643-695c2101bddb"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.077564 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e80b6b9d-3bfd-4315-8643-695c2101bddb" (UID: "e80b6b9d-3bfd-4315-8643-695c2101bddb"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.077557 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e80b6b9d-3bfd-4315-8643-695c2101bddb" (UID: "e80b6b9d-3bfd-4315-8643-695c2101bddb"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.077612 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-config" (OuterVolumeSpecName: "console-config") pod "e80b6b9d-3bfd-4315-8643-695c2101bddb" (UID: "e80b6b9d-3bfd-4315-8643-695c2101bddb"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.083751 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e80b6b9d-3bfd-4315-8643-695c2101bddb" (UID: "e80b6b9d-3bfd-4315-8643-695c2101bddb"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.083788 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e80b6b9d-3bfd-4315-8643-695c2101bddb-kube-api-access-tt5qr" (OuterVolumeSpecName: "kube-api-access-tt5qr") pod "e80b6b9d-3bfd-4315-8643-695c2101bddb" (UID: "e80b6b9d-3bfd-4315-8643-695c2101bddb"). InnerVolumeSpecName "kube-api-access-tt5qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.085400 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e80b6b9d-3bfd-4315-8643-695c2101bddb" (UID: "e80b6b9d-3bfd-4315-8643-695c2101bddb"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.111221 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fr95r"] Jan 26 23:20:38 crc kubenswrapper[4995]: E0126 23:20:38.112570 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80b6b9d-3bfd-4315-8643-695c2101bddb" containerName="console" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.112606 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80b6b9d-3bfd-4315-8643-695c2101bddb" containerName="console" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.113303 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80b6b9d-3bfd-4315-8643-695c2101bddb" containerName="console" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.127545 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.127384 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fr95r"] Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.178195 4995 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.178239 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.178252 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt5qr\" (UniqueName: \"kubernetes.io/projected/e80b6b9d-3bfd-4315-8643-695c2101bddb-kube-api-access-tt5qr\") on node \"crc\" DevicePath \"\"" Jan 26 
23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.178263 4995 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.178273 4995 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.178285 4995 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.178297 4995 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.246499 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zt9nn_e80b6b9d-3bfd-4315-8643-695c2101bddb/console/0.log" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.246553 4995 generic.go:334] "Generic (PLEG): container finished" podID="e80b6b9d-3bfd-4315-8643-695c2101bddb" containerID="4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042" exitCode=2 Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.246589 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zt9nn" event={"ID":"e80b6b9d-3bfd-4315-8643-695c2101bddb","Type":"ContainerDied","Data":"4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042"} Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.246626 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-zt9nn" event={"ID":"e80b6b9d-3bfd-4315-8643-695c2101bddb","Type":"ContainerDied","Data":"f8da331ad5479ba2deada0b967ed7ea0fd7ef2bec4a402a501182d5512dc16e8"} Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.246649 4995 scope.go:117] "RemoveContainer" containerID="4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.246708 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.267154 4995 scope.go:117] "RemoveContainer" containerID="4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042" Jan 26 23:20:38 crc kubenswrapper[4995]: E0126 23:20:38.267587 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042\": container with ID starting with 4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042 not found: ID does not exist" containerID="4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.267635 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042"} err="failed to get container status \"4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042\": rpc error: code = NotFound desc = could not find container \"4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042\": container with ID starting with 4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042 not found: ID does not exist" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.279271 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-utilities\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.279341 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnbcj\" (UniqueName: \"kubernetes.io/projected/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-kube-api-access-gnbcj\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.279374 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-catalog-content\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.281831 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zt9nn"] Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.285588 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-zt9nn"] Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.381044 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnbcj\" (UniqueName: \"kubernetes.io/projected/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-kube-api-access-gnbcj\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.381378 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-catalog-content\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.381422 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-utilities\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.381840 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-catalog-content\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.381907 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-utilities\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.400445 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnbcj\" (UniqueName: \"kubernetes.io/projected/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-kube-api-access-gnbcj\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.457752 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.524594 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e80b6b9d-3bfd-4315-8643-695c2101bddb" path="/var/lib/kubelet/pods/e80b6b9d-3bfd-4315-8643-695c2101bddb/volumes" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.667899 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fr95r"] Jan 26 23:20:39 crc kubenswrapper[4995]: I0126 23:20:39.254598 4995 generic.go:334] "Generic (PLEG): container finished" podID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerID="c32abd6d376b09e4aa0e4ed7e261fe97d4985391e608bb9887cb7657a7cec8bf" exitCode=0 Jan 26 23:20:39 crc kubenswrapper[4995]: I0126 23:20:39.254654 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" event={"ID":"a59475a0-c56e-4d7d-a062-2a9b7188a601","Type":"ContainerDied","Data":"c32abd6d376b09e4aa0e4ed7e261fe97d4985391e608bb9887cb7657a7cec8bf"} Jan 26 23:20:39 crc kubenswrapper[4995]: I0126 23:20:39.256574 4995 generic.go:334] "Generic (PLEG): container finished" podID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerID="7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5" exitCode=0 Jan 26 23:20:39 crc kubenswrapper[4995]: I0126 23:20:39.256619 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr95r" event={"ID":"c64724ab-40c4-4f05-a58b-a8ce4b5ece57","Type":"ContainerDied","Data":"7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5"} Jan 26 23:20:39 crc kubenswrapper[4995]: I0126 23:20:39.256657 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr95r" 
event={"ID":"c64724ab-40c4-4f05-a58b-a8ce4b5ece57","Type":"ContainerStarted","Data":"466e4ff2680bff531e80450abec354d669e22023a880a1983570609fe3fd89c0"} Jan 26 23:20:40 crc kubenswrapper[4995]: I0126 23:20:40.266723 4995 generic.go:334] "Generic (PLEG): container finished" podID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerID="26d24ca8d6c2a866bc51af3ac1d29df06ef85fd35aca521cfa360d493e37a644" exitCode=0 Jan 26 23:20:40 crc kubenswrapper[4995]: I0126 23:20:40.266809 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" event={"ID":"a59475a0-c56e-4d7d-a062-2a9b7188a601","Type":"ContainerDied","Data":"26d24ca8d6c2a866bc51af3ac1d29df06ef85fd35aca521cfa360d493e37a644"} Jan 26 23:20:40 crc kubenswrapper[4995]: I0126 23:20:40.269734 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr95r" event={"ID":"c64724ab-40c4-4f05-a58b-a8ce4b5ece57","Type":"ContainerStarted","Data":"9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583"} Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.281341 4995 generic.go:334] "Generic (PLEG): container finished" podID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerID="9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583" exitCode=0 Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.281411 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr95r" event={"ID":"c64724ab-40c4-4f05-a58b-a8ce4b5ece57","Type":"ContainerDied","Data":"9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583"} Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.578882 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.750601 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wfk8\" (UniqueName: \"kubernetes.io/projected/a59475a0-c56e-4d7d-a062-2a9b7188a601-kube-api-access-6wfk8\") pod \"a59475a0-c56e-4d7d-a062-2a9b7188a601\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.751065 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-util\") pod \"a59475a0-c56e-4d7d-a062-2a9b7188a601\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.751131 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-bundle\") pod \"a59475a0-c56e-4d7d-a062-2a9b7188a601\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.752975 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-bundle" (OuterVolumeSpecName: "bundle") pod "a59475a0-c56e-4d7d-a062-2a9b7188a601" (UID: "a59475a0-c56e-4d7d-a062-2a9b7188a601"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.759256 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59475a0-c56e-4d7d-a062-2a9b7188a601-kube-api-access-6wfk8" (OuterVolumeSpecName: "kube-api-access-6wfk8") pod "a59475a0-c56e-4d7d-a062-2a9b7188a601" (UID: "a59475a0-c56e-4d7d-a062-2a9b7188a601"). InnerVolumeSpecName "kube-api-access-6wfk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.780522 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-util" (OuterVolumeSpecName: "util") pod "a59475a0-c56e-4d7d-a062-2a9b7188a601" (UID: "a59475a0-c56e-4d7d-a062-2a9b7188a601"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.853254 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wfk8\" (UniqueName: \"kubernetes.io/projected/a59475a0-c56e-4d7d-a062-2a9b7188a601-kube-api-access-6wfk8\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.853305 4995 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-util\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.853327 4995 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:42 crc kubenswrapper[4995]: I0126 23:20:42.290812 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" event={"ID":"a59475a0-c56e-4d7d-a062-2a9b7188a601","Type":"ContainerDied","Data":"4d3e8b9e3d5aefbf68934c0abcdc1540b05aa2219fd460a9f65757f103f5b9f6"} Jan 26 23:20:42 crc kubenswrapper[4995]: I0126 23:20:42.290852 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d3e8b9e3d5aefbf68934c0abcdc1540b05aa2219fd460a9f65757f103f5b9f6" Jan 26 23:20:42 crc kubenswrapper[4995]: I0126 23:20:42.290861 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:42 crc kubenswrapper[4995]: I0126 23:20:42.292858 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr95r" event={"ID":"c64724ab-40c4-4f05-a58b-a8ce4b5ece57","Type":"ContainerStarted","Data":"d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3"} Jan 26 23:20:42 crc kubenswrapper[4995]: I0126 23:20:42.314432 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fr95r" podStartSLOduration=1.833928284 podStartE2EDuration="4.31441901s" podCreationTimestamp="2026-01-26 23:20:38 +0000 UTC" firstStartedPulling="2026-01-26 23:20:39.25787373 +0000 UTC m=+743.422581205" lastFinishedPulling="2026-01-26 23:20:41.738364456 +0000 UTC m=+745.903071931" observedRunningTime="2026-01-26 23:20:42.314159133 +0000 UTC m=+746.478866638" watchObservedRunningTime="2026-01-26 23:20:42.31441901 +0000 UTC m=+746.479126475" Jan 26 23:20:48 crc kubenswrapper[4995]: I0126 23:20:48.458797 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:48 crc kubenswrapper[4995]: I0126 23:20:48.459462 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:49 crc kubenswrapper[4995]: I0126 23:20:49.523737 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fr95r" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="registry-server" probeResult="failure" output=< Jan 26 23:20:49 crc kubenswrapper[4995]: timeout: failed to connect service ":50051" within 1s Jan 26 23:20:49 crc kubenswrapper[4995]: > Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.200964 4995 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z"] Jan 26 23:20:51 crc kubenswrapper[4995]: E0126 23:20:51.208988 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerName="extract" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.209042 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerName="extract" Jan 26 23:20:51 crc kubenswrapper[4995]: E0126 23:20:51.209077 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerName="util" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.209085 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerName="util" Jan 26 23:20:51 crc kubenswrapper[4995]: E0126 23:20:51.209124 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerName="pull" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.209133 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerName="pull" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.209411 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerName="extract" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.210203 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.237262 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.237644 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.238190 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.238364 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.238547 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-8h6dp" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.245041 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z"] Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.382285 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-webhook-cert\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.382547 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdxw8\" (UniqueName: \"kubernetes.io/projected/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-kube-api-access-kdxw8\") pod 
\"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.382578 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-apiservice-cert\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.451699 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9"] Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.452565 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.455058 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.458012 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-tvm9l" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.458362 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.469219 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9"] Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.486545 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/191e8757-940a-4e3e-a884-f5935f9f8201-webhook-cert\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.486603 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbpkd\" (UniqueName: \"kubernetes.io/projected/191e8757-940a-4e3e-a884-f5935f9f8201-kube-api-access-mbpkd\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.486670 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-webhook-cert\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.486699 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdxw8\" (UniqueName: \"kubernetes.io/projected/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-kube-api-access-kdxw8\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.486731 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-apiservice-cert\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " 
pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.486766 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/191e8757-940a-4e3e-a884-f5935f9f8201-apiservice-cert\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.492939 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-apiservice-cert\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.493430 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-webhook-cert\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.503060 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdxw8\" (UniqueName: \"kubernetes.io/projected/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-kube-api-access-kdxw8\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.537958 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.587735 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/191e8757-940a-4e3e-a884-f5935f9f8201-apiservice-cert\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.587787 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/191e8757-940a-4e3e-a884-f5935f9f8201-webhook-cert\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.587813 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbpkd\" (UniqueName: \"kubernetes.io/projected/191e8757-940a-4e3e-a884-f5935f9f8201-kube-api-access-mbpkd\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.591291 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/191e8757-940a-4e3e-a884-f5935f9f8201-apiservice-cert\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.592159 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/191e8757-940a-4e3e-a884-f5935f9f8201-webhook-cert\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.611082 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbpkd\" (UniqueName: \"kubernetes.io/projected/191e8757-940a-4e3e-a884-f5935f9f8201-kube-api-access-mbpkd\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.760582 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z"] Jan 26 23:20:51 crc kubenswrapper[4995]: W0126 23:20:51.770147 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb70f3de5_9e6d_465f_b6c3_b9eb12eba2d9.slice/crio-7de43e024bf737f9db5257681a0a8de501619623d7cb79d7773e0eb13061ac1b WatchSource:0}: Error finding container 7de43e024bf737f9db5257681a0a8de501619623d7cb79d7773e0eb13061ac1b: Status 404 returned error can't find the container with id 7de43e024bf737f9db5257681a0a8de501619623d7cb79d7773e0eb13061ac1b Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.772543 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:52 crc kubenswrapper[4995]: I0126 23:20:52.027322 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9"] Jan 26 23:20:52 crc kubenswrapper[4995]: W0126 23:20:52.033160 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod191e8757_940a_4e3e_a884_f5935f9f8201.slice/crio-095360b0733341f2812ead593aee66baafee0a0a1292f0d435e49dfaaf23e1e5 WatchSource:0}: Error finding container 095360b0733341f2812ead593aee66baafee0a0a1292f0d435e49dfaaf23e1e5: Status 404 returned error can't find the container with id 095360b0733341f2812ead593aee66baafee0a0a1292f0d435e49dfaaf23e1e5 Jan 26 23:20:52 crc kubenswrapper[4995]: I0126 23:20:52.365215 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" event={"ID":"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9","Type":"ContainerStarted","Data":"7de43e024bf737f9db5257681a0a8de501619623d7cb79d7773e0eb13061ac1b"} Jan 26 23:20:52 crc kubenswrapper[4995]: I0126 23:20:52.366410 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" event={"ID":"191e8757-940a-4e3e-a884-f5935f9f8201","Type":"ContainerStarted","Data":"095360b0733341f2812ead593aee66baafee0a0a1292f0d435e49dfaaf23e1e5"} Jan 26 23:20:57 crc kubenswrapper[4995]: I0126 23:20:57.397591 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" event={"ID":"191e8757-940a-4e3e-a884-f5935f9f8201","Type":"ContainerStarted","Data":"98b36fd5d8e05e25e9891d2baa14df8d47f0c89dea4c1d9da6e14119b1efab91"} Jan 26 23:20:57 crc kubenswrapper[4995]: I0126 23:20:57.398347 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:57 crc kubenswrapper[4995]: I0126 23:20:57.398994 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" event={"ID":"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9","Type":"ContainerStarted","Data":"2afd929c9c4ae68acbafa81ff63b02088309bfe1a47b564f1cde8ada3ed1b29c"} Jan 26 23:20:57 crc kubenswrapper[4995]: I0126 23:20:57.399746 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:57 crc kubenswrapper[4995]: I0126 23:20:57.426682 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" podStartSLOduration=1.997566972 podStartE2EDuration="6.426664567s" podCreationTimestamp="2026-01-26 23:20:51 +0000 UTC" firstStartedPulling="2026-01-26 23:20:52.036066585 +0000 UTC m=+756.200774050" lastFinishedPulling="2026-01-26 23:20:56.46516418 +0000 UTC m=+760.629871645" observedRunningTime="2026-01-26 23:20:57.423533347 +0000 UTC m=+761.588240812" watchObservedRunningTime="2026-01-26 23:20:57.426664567 +0000 UTC m=+761.591372032" Jan 26 23:20:57 crc kubenswrapper[4995]: I0126 23:20:57.445389 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" podStartSLOduration=1.888058811 podStartE2EDuration="6.44537138s" podCreationTimestamp="2026-01-26 23:20:51 +0000 UTC" firstStartedPulling="2026-01-26 23:20:51.782035588 +0000 UTC m=+755.946743053" lastFinishedPulling="2026-01-26 23:20:56.339348157 +0000 UTC m=+760.504055622" observedRunningTime="2026-01-26 23:20:57.441618685 +0000 UTC m=+761.606326150" watchObservedRunningTime="2026-01-26 23:20:57.44537138 +0000 UTC m=+761.610078845" Jan 26 23:20:58 crc kubenswrapper[4995]: I0126 23:20:58.499490 4995 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:58 crc kubenswrapper[4995]: I0126 23:20:58.539381 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:58 crc kubenswrapper[4995]: I0126 23:20:58.748407 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fr95r"] Jan 26 23:21:00 crc kubenswrapper[4995]: I0126 23:21:00.415845 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fr95r" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="registry-server" containerID="cri-o://d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3" gracePeriod=2 Jan 26 23:21:00 crc kubenswrapper[4995]: I0126 23:21:00.829441 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:21:00 crc kubenswrapper[4995]: I0126 23:21:00.934573 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-utilities\") pod \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " Jan 26 23:21:00 crc kubenswrapper[4995]: I0126 23:21:00.934631 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-catalog-content\") pod \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " Jan 26 23:21:00 crc kubenswrapper[4995]: I0126 23:21:00.934711 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnbcj\" (UniqueName: \"kubernetes.io/projected/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-kube-api-access-gnbcj\") 
pod \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") "
Jan 26 23:21:00 crc kubenswrapper[4995]: I0126 23:21:00.935633 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-utilities" (OuterVolumeSpecName: "utilities") pod "c64724ab-40c4-4f05-a58b-a8ce4b5ece57" (UID: "c64724ab-40c4-4f05-a58b-a8ce4b5ece57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:21:00 crc kubenswrapper[4995]: I0126 23:21:00.941831 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-kube-api-access-gnbcj" (OuterVolumeSpecName: "kube-api-access-gnbcj") pod "c64724ab-40c4-4f05-a58b-a8ce4b5ece57" (UID: "c64724ab-40c4-4f05-a58b-a8ce4b5ece57"). InnerVolumeSpecName "kube-api-access-gnbcj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.036065 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnbcj\" (UniqueName: \"kubernetes.io/projected/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-kube-api-access-gnbcj\") on node \"crc\" DevicePath \"\""
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.036118 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.056930 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c64724ab-40c4-4f05-a58b-a8ce4b5ece57" (UID: "c64724ab-40c4-4f05-a58b-a8ce4b5ece57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.137881 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.423364 4995 generic.go:334] "Generic (PLEG): container finished" podID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerID="d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3" exitCode=0
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.423420 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fr95r"
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.423416 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr95r" event={"ID":"c64724ab-40c4-4f05-a58b-a8ce4b5ece57","Type":"ContainerDied","Data":"d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3"}
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.423812 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr95r" event={"ID":"c64724ab-40c4-4f05-a58b-a8ce4b5ece57","Type":"ContainerDied","Data":"466e4ff2680bff531e80450abec354d669e22023a880a1983570609fe3fd89c0"}
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.423837 4995 scope.go:117] "RemoveContainer" containerID="d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3"
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.440295 4995 scope.go:117] "RemoveContainer" containerID="9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583"
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.456203 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fr95r"]
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.461052 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fr95r"]
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.469302 4995 scope.go:117] "RemoveContainer" containerID="7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5"
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.483967 4995 scope.go:117] "RemoveContainer" containerID="d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3"
Jan 26 23:21:01 crc kubenswrapper[4995]: E0126 23:21:01.484596 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3\": container with ID starting with d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3 not found: ID does not exist" containerID="d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3"
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.484665 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3"} err="failed to get container status \"d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3\": rpc error: code = NotFound desc = could not find container \"d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3\": container with ID starting with d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3 not found: ID does not exist"
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.484711 4995 scope.go:117] "RemoveContainer" containerID="9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583"
Jan 26 23:21:01 crc kubenswrapper[4995]: E0126 23:21:01.485257 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583\": container with ID starting with 9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583 not found: ID does not exist" containerID="9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583"
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.485299 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583"} err="failed to get container status \"9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583\": rpc error: code = NotFound desc = could not find container \"9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583\": container with ID starting with 9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583 not found: ID does not exist"
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.485331 4995 scope.go:117] "RemoveContainer" containerID="7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5"
Jan 26 23:21:01 crc kubenswrapper[4995]: E0126 23:21:01.485832 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5\": container with ID starting with 7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5 not found: ID does not exist" containerID="7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5"
Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.485878 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5"} err="failed to get container status \"7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5\": rpc error: code = NotFound desc = could not find container \"7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5\": container with ID starting with 7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5 not found: ID does not exist"
Jan 26 23:21:02 crc kubenswrapper[4995]: I0126 23:21:02.527879 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" path="/var/lib/kubelet/pods/c64724ab-40c4-4f05-a58b-a8ce4b5ece57/volumes"
Jan 26 23:21:10 crc kubenswrapper[4995]: I0126 23:21:10.893965 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 23:21:10 crc kubenswrapper[4995]: I0126 23:21:10.894729 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 23:21:11 crc kubenswrapper[4995]: I0126 23:21:11.778379 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9"
Jan 26 23:21:31 crc kubenswrapper[4995]: I0126 23:21:31.541354 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.440519 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9"]
Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.440780 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="registry-server"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.440795 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="registry-server"
Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.440819 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="extract-utilities"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.440827 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="extract-utilities"
Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.440835 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="extract-content"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.440843 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="extract-content"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.440942 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="registry-server"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.441345 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.442955 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-h7fjj"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.443335 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.450993 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-lt5dg"]
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.453959 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.456418 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.456447 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.465340 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9"]
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.536610 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jlkxq"]
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.537647 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.539301 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.539429 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.539436 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.539664 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-sf9mf"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.542825 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-726qp\" (UniqueName: \"kubernetes.io/projected/d71dd2bc-e8c9-4a37-9096-35a1f19333f8-kube-api-access-726qp\") pod \"frr-k8s-webhook-server-7df86c4f6c-bqkf9\" (UID: \"d71dd2bc-e8c9-4a37-9096-35a1f19333f8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.542864 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d71dd2bc-e8c9-4a37-9096-35a1f19333f8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bqkf9\" (UID: \"d71dd2bc-e8c9-4a37-9096-35a1f19333f8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.549793 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-hp8cv"]
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.552034 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-hp8cv"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.555915 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.583506 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-hp8cv"]
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.643814 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-cert\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.643853 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-startup\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.643913 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snck7\" (UniqueName: \"kubernetes.io/projected/11187758-87a2-4879-8421-5d9cdc4fd8bd-kube-api-access-snck7\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.643991 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-726qp\" (UniqueName: \"kubernetes.io/projected/d71dd2bc-e8c9-4a37-9096-35a1f19333f8-kube-api-access-726qp\") pod \"frr-k8s-webhook-server-7df86c4f6c-bqkf9\" (UID: \"d71dd2bc-e8c9-4a37-9096-35a1f19333f8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644058 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics-certs\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644153 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d71dd2bc-e8c9-4a37-9096-35a1f19333f8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bqkf9\" (UID: \"d71dd2bc-e8c9-4a37-9096-35a1f19333f8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644178 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644271 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-reloader\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644307 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4768de9d-be12-4b0b-9bd1-03f127a1a557-metallb-excludel2\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644324 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxtth\" (UniqueName: \"kubernetes.io/projected/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-kube-api-access-zxtth\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644344 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-metrics-certs\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644359 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxmz9\" (UniqueName: \"kubernetes.io/projected/4768de9d-be12-4b0b-9bd1-03f127a1a557-kube-api-access-pxmz9\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644389 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644427 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-conf\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644445 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-metrics-certs\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644514 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-sockets\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.655972 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d71dd2bc-e8c9-4a37-9096-35a1f19333f8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bqkf9\" (UID: \"d71dd2bc-e8c9-4a37-9096-35a1f19333f8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.660268 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-726qp\" (UniqueName: \"kubernetes.io/projected/d71dd2bc-e8c9-4a37-9096-35a1f19333f8-kube-api-access-726qp\") pod \"frr-k8s-webhook-server-7df86c4f6c-bqkf9\" (UID: \"d71dd2bc-e8c9-4a37-9096-35a1f19333f8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.745985 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746321 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-conf\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746344 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-metrics-certs\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746374 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-sockets\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746397 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-cert\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746415 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-startup\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746440 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snck7\" (UniqueName: \"kubernetes.io/projected/11187758-87a2-4879-8421-5d9cdc4fd8bd-kube-api-access-snck7\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746461 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics-certs\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746466 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746483 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746526 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-reloader\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746540 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4768de9d-be12-4b0b-9bd1-03f127a1a557-metallb-excludel2\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746556 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxtth\" (UniqueName: \"kubernetes.io/projected/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-kube-api-access-zxtth\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746571 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-metrics-certs\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746588 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxmz9\" (UniqueName: \"kubernetes.io/projected/4768de9d-be12-4b0b-9bd1-03f127a1a557-kube-api-access-pxmz9\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.746616 4995 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746639 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-conf\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.746666 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-metrics-certs podName:fd8ee636-b6e8-4caf-bf47-8356cf3974a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:21:33.2466486 +0000 UTC m=+797.411356065 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-metrics-certs") pod "controller-6968d8fdc4-hp8cv" (UID: "fd8ee636-b6e8-4caf-bf47-8356cf3974a5") : secret "controller-certs-secret" not found
Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.746776 4995 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.746820 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist podName:4768de9d-be12-4b0b-9bd1-03f127a1a557 nodeName:}" failed. No retries permitted until 2026-01-26 23:21:33.246803104 +0000 UTC m=+797.411510569 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist") pod "speaker-jlkxq" (UID: "4768de9d-be12-4b0b-9bd1-03f127a1a557") : secret "metallb-memberlist" not found
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.747115 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-reloader\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.747156 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-sockets\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.747445 4995 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.747493 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics-certs podName:11187758-87a2-4879-8421-5d9cdc4fd8bd nodeName:}" failed. No retries permitted until 2026-01-26 23:21:33.247481611 +0000 UTC m=+797.412189086 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics-certs") pod "frr-k8s-lt5dg" (UID: "11187758-87a2-4879-8421-5d9cdc4fd8bd") : secret "frr-k8s-certs-secret" not found
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.747560 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-startup\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.747683 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4768de9d-be12-4b0b-9bd1-03f127a1a557-metallb-excludel2\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.749362 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.751758 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-metrics-certs\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.760272 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-cert\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.764925 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxtth\" (UniqueName: \"kubernetes.io/projected/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-kube-api-access-zxtth\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.765947 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snck7\" (UniqueName: \"kubernetes.io/projected/11187758-87a2-4879-8421-5d9cdc4fd8bd-kube-api-access-snck7\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.768855 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxmz9\" (UniqueName: \"kubernetes.io/projected/4768de9d-be12-4b0b-9bd1-03f127a1a557-kube-api-access-pxmz9\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.798335 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9"
Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.191190 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9"]
Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.253646 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-metrics-certs\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv"
Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.254350 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics-certs\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.254425 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:33 crc kubenswrapper[4995]: E0126 23:21:33.254692 4995 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 26 23:21:33 crc kubenswrapper[4995]: E0126 23:21:33.254793 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist podName:4768de9d-be12-4b0b-9bd1-03f127a1a557 nodeName:}" failed. No retries permitted until 2026-01-26 23:21:34.254763492 +0000 UTC m=+798.419470997 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist") pod "speaker-jlkxq" (UID: "4768de9d-be12-4b0b-9bd1-03f127a1a557") : secret "metallb-memberlist" not found
Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.259085 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics-certs\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.259992 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-metrics-certs\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv"
Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.420707 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lt5dg"
Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.467339 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-hp8cv"
Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.651060 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerStarted","Data":"6d09303f45eea82f4dc7eee094e0db82b1bb5b23a501308eea0d9f41ad68522c"}
Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.653885 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" event={"ID":"d71dd2bc-e8c9-4a37-9096-35a1f19333f8","Type":"ContainerStarted","Data":"248cdc3a4e2762b712ea55242aec0c2e031dda42c780e7ec609ace26fed35255"}
Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.724250 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-hp8cv"]
Jan 26 23:21:33 crc kubenswrapper[4995]: W0126 23:21:33.730018 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd8ee636_b6e8_4caf_bf47_8356cf3974a5.slice/crio-2d4eeece29017e0b8624d483756d64fb9716c658400d7a062ae83f710c3714b8 WatchSource:0}: Error finding container 2d4eeece29017e0b8624d483756d64fb9716c658400d7a062ae83f710c3714b8: Status 404 returned error can't find the container with id 2d4eeece29017e0b8624d483756d64fb9716c658400d7a062ae83f710c3714b8
Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.275129 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.280202 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.352758 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jlkxq"
Jan 26 23:21:34 crc kubenswrapper[4995]: W0126 23:21:34.368837 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4768de9d_be12_4b0b_9bd1_03f127a1a557.slice/crio-af605d46280a8bae8f42396ed1d3365b04e4fc3d36366ed9bde4fe5d607918e7 WatchSource:0}: Error finding container af605d46280a8bae8f42396ed1d3365b04e4fc3d36366ed9bde4fe5d607918e7: Status 404 returned error can't find the container with id af605d46280a8bae8f42396ed1d3365b04e4fc3d36366ed9bde4fe5d607918e7
Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.666371 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hp8cv" event={"ID":"fd8ee636-b6e8-4caf-bf47-8356cf3974a5","Type":"ContainerStarted","Data":"628754c7d459e4437059941126e0dbbba3dd61c0d62972731dd306862564f1fe"}
Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.666429 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hp8cv" event={"ID":"fd8ee636-b6e8-4caf-bf47-8356cf3974a5","Type":"ContainerStarted","Data":"6f1aef8e072fbdcb86c9f70a6275938598495efca734ea779b5601b59454b35f"}
Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.666441 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hp8cv" event={"ID":"fd8ee636-b6e8-4caf-bf47-8356cf3974a5","Type":"ContainerStarted","Data":"2d4eeece29017e0b8624d483756d64fb9716c658400d7a062ae83f710c3714b8"}
Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.667241 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.669280 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jlkxq" event={"ID":"4768de9d-be12-4b0b-9bd1-03f127a1a557","Type":"ContainerStarted","Data":"ab27940cb234b8781beeeea3062a707b78313c17ebcc25eb27a6166659261441"} Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.669323 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jlkxq" event={"ID":"4768de9d-be12-4b0b-9bd1-03f127a1a557","Type":"ContainerStarted","Data":"af605d46280a8bae8f42396ed1d3365b04e4fc3d36366ed9bde4fe5d607918e7"} Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.698193 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-hp8cv" podStartSLOduration=2.69816624 podStartE2EDuration="2.69816624s" podCreationTimestamp="2026-01-26 23:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:21:34.688268919 +0000 UTC m=+798.852976394" watchObservedRunningTime="2026-01-26 23:21:34.69816624 +0000 UTC m=+798.862873705" Jan 26 23:21:35 crc kubenswrapper[4995]: I0126 23:21:35.692688 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jlkxq" event={"ID":"4768de9d-be12-4b0b-9bd1-03f127a1a557","Type":"ContainerStarted","Data":"689c97cf6faa9b2af984dfff69c4b1359663537d4113d60e2ce71bc9ad2e5e70"} Jan 26 23:21:35 crc kubenswrapper[4995]: I0126 23:21:35.716878 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jlkxq" podStartSLOduration=3.716857918 podStartE2EDuration="3.716857918s" podCreationTimestamp="2026-01-26 23:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:21:35.712765315 +0000 UTC 
m=+799.877472790" watchObservedRunningTime="2026-01-26 23:21:35.716857918 +0000 UTC m=+799.881565373" Jan 26 23:21:36 crc kubenswrapper[4995]: I0126 23:21:36.706766 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jlkxq" Jan 26 23:21:40 crc kubenswrapper[4995]: I0126 23:21:40.736289 4995 generic.go:334] "Generic (PLEG): container finished" podID="11187758-87a2-4879-8421-5d9cdc4fd8bd" containerID="feab391947619bd0d9a3e71925298a7c291add25c95e7e12d054b0599bfa6837" exitCode=0 Jan 26 23:21:40 crc kubenswrapper[4995]: I0126 23:21:40.736372 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerDied","Data":"feab391947619bd0d9a3e71925298a7c291add25c95e7e12d054b0599bfa6837"} Jan 26 23:21:40 crc kubenswrapper[4995]: I0126 23:21:40.740157 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" event={"ID":"d71dd2bc-e8c9-4a37-9096-35a1f19333f8","Type":"ContainerStarted","Data":"2f89c00262b70796eb9cb03c5a330ba1e94e2c61961cb995b32e97c6db6b1925"} Jan 26 23:21:40 crc kubenswrapper[4995]: I0126 23:21:40.740300 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" Jan 26 23:21:40 crc kubenswrapper[4995]: I0126 23:21:40.785368 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" podStartSLOduration=1.548354907 podStartE2EDuration="8.785347066s" podCreationTimestamp="2026-01-26 23:21:32 +0000 UTC" firstStartedPulling="2026-01-26 23:21:33.205642358 +0000 UTC m=+797.370349823" lastFinishedPulling="2026-01-26 23:21:40.442634517 +0000 UTC m=+804.607341982" observedRunningTime="2026-01-26 23:21:40.783999942 +0000 UTC m=+804.948707467" watchObservedRunningTime="2026-01-26 23:21:40.785347066 +0000 UTC m=+804.950054551" Jan 26 
23:21:40 crc kubenswrapper[4995]: I0126 23:21:40.893660 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:21:40 crc kubenswrapper[4995]: I0126 23:21:40.893735 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:21:41 crc kubenswrapper[4995]: I0126 23:21:41.751054 4995 generic.go:334] "Generic (PLEG): container finished" podID="11187758-87a2-4879-8421-5d9cdc4fd8bd" containerID="e2de9b23e67dc3791e4411a0d1d17c652ccf78e323a6381f1fe611e6be1880d9" exitCode=0 Jan 26 23:21:41 crc kubenswrapper[4995]: I0126 23:21:41.751198 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerDied","Data":"e2de9b23e67dc3791e4411a0d1d17c652ccf78e323a6381f1fe611e6be1880d9"} Jan 26 23:21:42 crc kubenswrapper[4995]: I0126 23:21:42.764385 4995 generic.go:334] "Generic (PLEG): container finished" podID="11187758-87a2-4879-8421-5d9cdc4fd8bd" containerID="086bc9d9d39cf4928d6979ce48066ffd786a42b1fde5d217a55a3708fcefb6ce" exitCode=0 Jan 26 23:21:42 crc kubenswrapper[4995]: I0126 23:21:42.764454 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerDied","Data":"086bc9d9d39cf4928d6979ce48066ffd786a42b1fde5d217a55a3708fcefb6ce"} Jan 26 23:21:43 crc kubenswrapper[4995]: I0126 23:21:43.471208 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:43 crc kubenswrapper[4995]: I0126 23:21:43.776592 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerStarted","Data":"4326cffa9ffdb13b8a22e91a81bab58e12df6df32e6be86bb959e23bdc5daf5a"} Jan 26 23:21:43 crc kubenswrapper[4995]: I0126 23:21:43.776637 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerStarted","Data":"cf41880edaa3f06d5b0f184600bd762db9b7dc85c86c6fc6ab701ba773608423"} Jan 26 23:21:43 crc kubenswrapper[4995]: I0126 23:21:43.776650 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerStarted","Data":"5ac09632dabf3319e20fd304617b7a931601e7696cbe87ebf95e49e185b1cf7c"} Jan 26 23:21:43 crc kubenswrapper[4995]: I0126 23:21:43.776662 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerStarted","Data":"855c410e2e10ec3f2d3970b09fce8fdbce9eef3c80a6a03d86784889697b689f"} Jan 26 23:21:43 crc kubenswrapper[4995]: I0126 23:21:43.776673 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerStarted","Data":"5637b1909725ba03f4d3d3420c6a75ad43bcb6a19fe53ddb4e6ff616c2a287a9"} Jan 26 23:21:44 crc kubenswrapper[4995]: I0126 23:21:44.357993 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jlkxq" Jan 26 23:21:44 crc kubenswrapper[4995]: I0126 23:21:44.787507 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" 
event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerStarted","Data":"8cf5ac4b80885eea33a6e9bb209ce8c1443c374f1acf06c6ba0320c6203072d5"} Jan 26 23:21:44 crc kubenswrapper[4995]: I0126 23:21:44.788221 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:44 crc kubenswrapper[4995]: I0126 23:21:44.811727 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-lt5dg" podStartSLOduration=5.997229614 podStartE2EDuration="12.811710472s" podCreationTimestamp="2026-01-26 23:21:32 +0000 UTC" firstStartedPulling="2026-01-26 23:21:33.599429281 +0000 UTC m=+797.764136786" lastFinishedPulling="2026-01-26 23:21:40.413910179 +0000 UTC m=+804.578617644" observedRunningTime="2026-01-26 23:21:44.811475987 +0000 UTC m=+808.976183472" watchObservedRunningTime="2026-01-26 23:21:44.811710472 +0000 UTC m=+808.976417937" Jan 26 23:21:45 crc kubenswrapper[4995]: I0126 23:21:45.867399 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8"] Jan 26 23:21:45 crc kubenswrapper[4995]: I0126 23:21:45.868807 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:45 crc kubenswrapper[4995]: I0126 23:21:45.877529 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 23:21:45 crc kubenswrapper[4995]: I0126 23:21:45.877600 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8"] Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.065033 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.065213 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts2kr\" (UniqueName: \"kubernetes.io/projected/a2fc70c8-babd-496e-8d1c-acd82bb98901-kube-api-access-ts2kr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.065243 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: 
I0126 23:21:46.166377 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.166476 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts2kr\" (UniqueName: \"kubernetes.io/projected/a2fc70c8-babd-496e-8d1c-acd82bb98901-kube-api-access-ts2kr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.166545 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.166996 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.167143 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.188563 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts2kr\" (UniqueName: \"kubernetes.io/projected/a2fc70c8-babd-496e-8d1c-acd82bb98901-kube-api-access-ts2kr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.272381 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.514434 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8"] Jan 26 23:21:46 crc kubenswrapper[4995]: W0126 23:21:46.515061 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2fc70c8_babd_496e_8d1c_acd82bb98901.slice/crio-1d8d9645957e68a7aac9bf15e87c972880983c35e81d57598a6dcf2e7c1f8c39 WatchSource:0}: Error finding container 1d8d9645957e68a7aac9bf15e87c972880983c35e81d57598a6dcf2e7c1f8c39: Status 404 returned error can't find the container with id 1d8d9645957e68a7aac9bf15e87c972880983c35e81d57598a6dcf2e7c1f8c39 Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.801130 4995 generic.go:334] "Generic (PLEG): container finished" podID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerID="b4ec2a215429da042d9b72336fc0b2946bffca22fe46351c2e9d2bd1313d641e" exitCode=0 
Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.801199 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" event={"ID":"a2fc70c8-babd-496e-8d1c-acd82bb98901","Type":"ContainerDied","Data":"b4ec2a215429da042d9b72336fc0b2946bffca22fe46351c2e9d2bd1313d641e"} Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.801229 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" event={"ID":"a2fc70c8-babd-496e-8d1c-acd82bb98901","Type":"ContainerStarted","Data":"1d8d9645957e68a7aac9bf15e87c972880983c35e81d57598a6dcf2e7c1f8c39"} Jan 26 23:21:48 crc kubenswrapper[4995]: I0126 23:21:48.421367 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:48 crc kubenswrapper[4995]: I0126 23:21:48.530468 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:50 crc kubenswrapper[4995]: I0126 23:21:50.971694 4995 generic.go:334] "Generic (PLEG): container finished" podID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerID="d5342cfdcf66dca9b448cad05d76b2baca8cafb86cfa4cdba377ec0f2d5d6127" exitCode=0 Jan 26 23:21:50 crc kubenswrapper[4995]: I0126 23:21:50.971828 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" event={"ID":"a2fc70c8-babd-496e-8d1c-acd82bb98901","Type":"ContainerDied","Data":"d5342cfdcf66dca9b448cad05d76b2baca8cafb86cfa4cdba377ec0f2d5d6127"} Jan 26 23:21:51 crc kubenswrapper[4995]: I0126 23:21:51.985219 4995 generic.go:334] "Generic (PLEG): container finished" podID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerID="028ea523146713c5f53bec59ed2513db8b08261ea099e88966ffd7dee26b9fc6" exitCode=0 Jan 26 23:21:51 crc kubenswrapper[4995]: I0126 
23:21:51.985300 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" event={"ID":"a2fc70c8-babd-496e-8d1c-acd82bb98901","Type":"ContainerDied","Data":"028ea523146713c5f53bec59ed2513db8b08261ea099e88966ffd7dee26b9fc6"} Jan 26 23:21:52 crc kubenswrapper[4995]: I0126 23:21:52.803492 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.310163 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.424134 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.499843 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-bundle\") pod \"a2fc70c8-babd-496e-8d1c-acd82bb98901\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.499897 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-util\") pod \"a2fc70c8-babd-496e-8d1c-acd82bb98901\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.499960 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts2kr\" (UniqueName: \"kubernetes.io/projected/a2fc70c8-babd-496e-8d1c-acd82bb98901-kube-api-access-ts2kr\") pod \"a2fc70c8-babd-496e-8d1c-acd82bb98901\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " Jan 26 23:21:53 crc kubenswrapper[4995]: 
I0126 23:21:53.501490 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-bundle" (OuterVolumeSpecName: "bundle") pod "a2fc70c8-babd-496e-8d1c-acd82bb98901" (UID: "a2fc70c8-babd-496e-8d1c-acd82bb98901"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.506387 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2fc70c8-babd-496e-8d1c-acd82bb98901-kube-api-access-ts2kr" (OuterVolumeSpecName: "kube-api-access-ts2kr") pod "a2fc70c8-babd-496e-8d1c-acd82bb98901" (UID: "a2fc70c8-babd-496e-8d1c-acd82bb98901"). InnerVolumeSpecName "kube-api-access-ts2kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.516848 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-util" (OuterVolumeSpecName: "util") pod "a2fc70c8-babd-496e-8d1c-acd82bb98901" (UID: "a2fc70c8-babd-496e-8d1c-acd82bb98901"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.601378 4995 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.601412 4995 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-util\") on node \"crc\" DevicePath \"\"" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.601422 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts2kr\" (UniqueName: \"kubernetes.io/projected/a2fc70c8-babd-496e-8d1c-acd82bb98901-kube-api-access-ts2kr\") on node \"crc\" DevicePath \"\"" Jan 26 23:21:54 crc kubenswrapper[4995]: I0126 23:21:54.000823 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" event={"ID":"a2fc70c8-babd-496e-8d1c-acd82bb98901","Type":"ContainerDied","Data":"1d8d9645957e68a7aac9bf15e87c972880983c35e81d57598a6dcf2e7c1f8c39"} Jan 26 23:21:54 crc kubenswrapper[4995]: I0126 23:21:54.000899 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:54 crc kubenswrapper[4995]: I0126 23:21:54.000911 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d8d9645957e68a7aac9bf15e87c972880983c35e81d57598a6dcf2e7c1f8c39" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.018405 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5"] Jan 26 23:22:00 crc kubenswrapper[4995]: E0126 23:22:00.019302 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerName="extract" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.019323 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerName="extract" Jan 26 23:22:00 crc kubenswrapper[4995]: E0126 23:22:00.019350 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerName="pull" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.019363 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerName="pull" Jan 26 23:22:00 crc kubenswrapper[4995]: E0126 23:22:00.019388 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerName="util" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.019401 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerName="util" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.019612 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerName="extract" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.020335 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.023616 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.024584 4995 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-srptp" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.024797 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.042430 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5"] Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.189317 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e647402-f342-4296-a09b-512075e3d867-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cdrg5\" (UID: \"6e647402-f342-4296-a09b-512075e3d867\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.189396 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25djz\" (UniqueName: \"kubernetes.io/projected/6e647402-f342-4296-a09b-512075e3d867-kube-api-access-25djz\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cdrg5\" (UID: \"6e647402-f342-4296-a09b-512075e3d867\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.290855 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/6e647402-f342-4296-a09b-512075e3d867-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cdrg5\" (UID: \"6e647402-f342-4296-a09b-512075e3d867\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.291403 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25djz\" (UniqueName: \"kubernetes.io/projected/6e647402-f342-4296-a09b-512075e3d867-kube-api-access-25djz\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cdrg5\" (UID: \"6e647402-f342-4296-a09b-512075e3d867\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.291582 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e647402-f342-4296-a09b-512075e3d867-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cdrg5\" (UID: \"6e647402-f342-4296-a09b-512075e3d867\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.327486 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25djz\" (UniqueName: \"kubernetes.io/projected/6e647402-f342-4296-a09b-512075e3d867-kube-api-access-25djz\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cdrg5\" (UID: \"6e647402-f342-4296-a09b-512075e3d867\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.365132 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.843523 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5"] Jan 26 23:22:01 crc kubenswrapper[4995]: I0126 23:22:01.078190 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" event={"ID":"6e647402-f342-4296-a09b-512075e3d867","Type":"ContainerStarted","Data":"ee9c0f5f7e15a6a13254242853fb979fca91eb29f8a979aec205009931daeac9"} Jan 26 23:22:09 crc kubenswrapper[4995]: I0126 23:22:09.137351 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" event={"ID":"6e647402-f342-4296-a09b-512075e3d867","Type":"ContainerStarted","Data":"f608b058288c4c97ca1f6c54a5135829294ba12fab1de8d9acfeaabbfe482882"} Jan 26 23:22:09 crc kubenswrapper[4995]: I0126 23:22:09.168429 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" podStartSLOduration=2.9798194430000002 podStartE2EDuration="10.168404087s" podCreationTimestamp="2026-01-26 23:21:59 +0000 UTC" firstStartedPulling="2026-01-26 23:22:00.848777106 +0000 UTC m=+825.013484581" lastFinishedPulling="2026-01-26 23:22:08.03736176 +0000 UTC m=+832.202069225" observedRunningTime="2026-01-26 23:22:09.164323853 +0000 UTC m=+833.329031358" watchObservedRunningTime="2026-01-26 23:22:09.168404087 +0000 UTC m=+833.333111592" Jan 26 23:22:10 crc kubenswrapper[4995]: I0126 23:22:10.893340 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:22:10 crc kubenswrapper[4995]: I0126 23:22:10.893412 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:22:10 crc kubenswrapper[4995]: I0126 23:22:10.893461 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:22:10 crc kubenswrapper[4995]: I0126 23:22:10.894046 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4093ba3ef240f4a22dc52fad4871f90a715052046ec4b9cbcd3de91d7cc9c46"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:22:10 crc kubenswrapper[4995]: I0126 23:22:10.894125 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" containerID="cri-o://b4093ba3ef240f4a22dc52fad4871f90a715052046ec4b9cbcd3de91d7cc9c46" gracePeriod=600 Jan 26 23:22:11 crc kubenswrapper[4995]: I0126 23:22:11.159190 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="b4093ba3ef240f4a22dc52fad4871f90a715052046ec4b9cbcd3de91d7cc9c46" exitCode=0 Jan 26 23:22:11 crc kubenswrapper[4995]: I0126 23:22:11.159536 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" 
event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"b4093ba3ef240f4a22dc52fad4871f90a715052046ec4b9cbcd3de91d7cc9c46"} Jan 26 23:22:11 crc kubenswrapper[4995]: I0126 23:22:11.159735 4995 scope.go:117] "RemoveContainer" containerID="e7586fc74dcbd4a07d6a21db761bf1e0053c5b99f541975fd3e7df1c8ddea8ab" Jan 26 23:22:12 crc kubenswrapper[4995]: I0126 23:22:12.172579 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"c18e947f3e89f6e4fe1ccdfb2540e67e2ab73a82cdb82488bfa3e6e58cba1576"} Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.071199 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-g88s9"] Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.072242 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.074281 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.074456 4995 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2p4ql" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.074857 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.083244 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-g88s9"] Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.170063 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4v2p\" (UniqueName: 
\"kubernetes.io/projected/5cf25cae-f1af-44e4-a613-be45044cf998-kube-api-access-n4v2p\") pod \"cert-manager-webhook-f4fb5df64-g88s9\" (UID: \"5cf25cae-f1af-44e4-a613-be45044cf998\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.170298 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cf25cae-f1af-44e4-a613-be45044cf998-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-g88s9\" (UID: \"5cf25cae-f1af-44e4-a613-be45044cf998\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.271705 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cf25cae-f1af-44e4-a613-be45044cf998-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-g88s9\" (UID: \"5cf25cae-f1af-44e4-a613-be45044cf998\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.272494 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4v2p\" (UniqueName: \"kubernetes.io/projected/5cf25cae-f1af-44e4-a613-be45044cf998-kube-api-access-n4v2p\") pod \"cert-manager-webhook-f4fb5df64-g88s9\" (UID: \"5cf25cae-f1af-44e4-a613-be45044cf998\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.301747 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cf25cae-f1af-44e4-a613-be45044cf998-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-g88s9\" (UID: \"5cf25cae-f1af-44e4-a613-be45044cf998\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.305288 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n4v2p\" (UniqueName: \"kubernetes.io/projected/5cf25cae-f1af-44e4-a613-be45044cf998-kube-api-access-n4v2p\") pod \"cert-manager-webhook-f4fb5df64-g88s9\" (UID: \"5cf25cae-f1af-44e4-a613-be45044cf998\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.387157 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.874910 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-g88s9"] Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.099082 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4"] Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.100086 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.102794 4995 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2snmb" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.132441 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4"] Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.183746 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" event={"ID":"5cf25cae-f1af-44e4-a613-be45044cf998","Type":"ContainerStarted","Data":"5afc86536bd5c9cafd19909a39f93dccfd2bcef22237386b7d1c92dc5fe258ec"} Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.284336 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-869rp\" (UniqueName: 
\"kubernetes.io/projected/10b23efd-9250-469e-8bce-4f31c05d1470-kube-api-access-869rp\") pod \"cert-manager-cainjector-855d9ccff4-hjbt4\" (UID: \"10b23efd-9250-469e-8bce-4f31c05d1470\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.284386 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10b23efd-9250-469e-8bce-4f31c05d1470-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-hjbt4\" (UID: \"10b23efd-9250-469e-8bce-4f31c05d1470\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.385371 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-869rp\" (UniqueName: \"kubernetes.io/projected/10b23efd-9250-469e-8bce-4f31c05d1470-kube-api-access-869rp\") pod \"cert-manager-cainjector-855d9ccff4-hjbt4\" (UID: \"10b23efd-9250-469e-8bce-4f31c05d1470\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.385414 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10b23efd-9250-469e-8bce-4f31c05d1470-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-hjbt4\" (UID: \"10b23efd-9250-469e-8bce-4f31c05d1470\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.407577 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10b23efd-9250-469e-8bce-4f31c05d1470-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-hjbt4\" (UID: \"10b23efd-9250-469e-8bce-4f31c05d1470\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.415717 
4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-869rp\" (UniqueName: \"kubernetes.io/projected/10b23efd-9250-469e-8bce-4f31c05d1470-kube-api-access-869rp\") pod \"cert-manager-cainjector-855d9ccff4-hjbt4\" (UID: \"10b23efd-9250-469e-8bce-4f31c05d1470\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.418515 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.700190 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4"] Jan 26 23:22:14 crc kubenswrapper[4995]: W0126 23:22:14.708875 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10b23efd_9250_469e_8bce_4f31c05d1470.slice/crio-2e77b2a571ff46fb79144257c16e1b4dcc2452b588bb7f1e7e061812c233798d WatchSource:0}: Error finding container 2e77b2a571ff46fb79144257c16e1b4dcc2452b588bb7f1e7e061812c233798d: Status 404 returned error can't find the container with id 2e77b2a571ff46fb79144257c16e1b4dcc2452b588bb7f1e7e061812c233798d Jan 26 23:22:15 crc kubenswrapper[4995]: I0126 23:22:15.196149 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" event={"ID":"10b23efd-9250-469e-8bce-4f31c05d1470","Type":"ContainerStarted","Data":"2e77b2a571ff46fb79144257c16e1b4dcc2452b588bb7f1e7e061812c233798d"} Jan 26 23:22:22 crc kubenswrapper[4995]: I0126 23:22:22.239593 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" event={"ID":"5cf25cae-f1af-44e4-a613-be45044cf998","Type":"ContainerStarted","Data":"24868142b6a95d8a94769cea5efaa3c1b006a9945f7752c5d692ca07ed3bb462"} Jan 26 23:22:22 crc kubenswrapper[4995]: I0126 23:22:22.241200 
4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:22 crc kubenswrapper[4995]: I0126 23:22:22.242488 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" event={"ID":"10b23efd-9250-469e-8bce-4f31c05d1470","Type":"ContainerStarted","Data":"60894095c7fddc6411cc043ab51dc546613a7013bb3c3706d72cb841f6af4957"} Jan 26 23:22:22 crc kubenswrapper[4995]: I0126 23:22:22.257183 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" podStartSLOduration=1.334659658 podStartE2EDuration="9.257169909s" podCreationTimestamp="2026-01-26 23:22:13 +0000 UTC" firstStartedPulling="2026-01-26 23:22:13.881992189 +0000 UTC m=+838.046699664" lastFinishedPulling="2026-01-26 23:22:21.80450244 +0000 UTC m=+845.969209915" observedRunningTime="2026-01-26 23:22:22.255243771 +0000 UTC m=+846.419951236" watchObservedRunningTime="2026-01-26 23:22:22.257169909 +0000 UTC m=+846.421877374" Jan 26 23:22:22 crc kubenswrapper[4995]: I0126 23:22:22.275012 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" podStartSLOduration=1.180109136 podStartE2EDuration="8.274991413s" podCreationTimestamp="2026-01-26 23:22:14 +0000 UTC" firstStartedPulling="2026-01-26 23:22:14.710008233 +0000 UTC m=+838.874715698" lastFinishedPulling="2026-01-26 23:22:21.80489051 +0000 UTC m=+845.969597975" observedRunningTime="2026-01-26 23:22:22.269601059 +0000 UTC m=+846.434308514" watchObservedRunningTime="2026-01-26 23:22:22.274991413 +0000 UTC m=+846.439698878" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.190778 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4g78v"] Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.191527 4995 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.202230 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4g78v"] Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.205879 4995 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-ssvqt" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.313528 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kmrp\" (UniqueName: \"kubernetes.io/projected/0ea05f4b-1373-4e08-9d78-e214b84cdc79-kube-api-access-9kmrp\") pod \"cert-manager-86cb77c54b-4g78v\" (UID: \"0ea05f4b-1373-4e08-9d78-e214b84cdc79\") " pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.313583 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ea05f4b-1373-4e08-9d78-e214b84cdc79-bound-sa-token\") pod \"cert-manager-86cb77c54b-4g78v\" (UID: \"0ea05f4b-1373-4e08-9d78-e214b84cdc79\") " pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.415179 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kmrp\" (UniqueName: \"kubernetes.io/projected/0ea05f4b-1373-4e08-9d78-e214b84cdc79-kube-api-access-9kmrp\") pod \"cert-manager-86cb77c54b-4g78v\" (UID: \"0ea05f4b-1373-4e08-9d78-e214b84cdc79\") " pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.415255 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ea05f4b-1373-4e08-9d78-e214b84cdc79-bound-sa-token\") pod \"cert-manager-86cb77c54b-4g78v\" (UID: 
\"0ea05f4b-1373-4e08-9d78-e214b84cdc79\") " pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.454630 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ea05f4b-1373-4e08-9d78-e214b84cdc79-bound-sa-token\") pod \"cert-manager-86cb77c54b-4g78v\" (UID: \"0ea05f4b-1373-4e08-9d78-e214b84cdc79\") " pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.454720 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kmrp\" (UniqueName: \"kubernetes.io/projected/0ea05f4b-1373-4e08-9d78-e214b84cdc79-kube-api-access-9kmrp\") pod \"cert-manager-86cb77c54b-4g78v\" (UID: \"0ea05f4b-1373-4e08-9d78-e214b84cdc79\") " pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.505346 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.758155 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4g78v"] Jan 26 23:22:23 crc kubenswrapper[4995]: W0126 23:22:23.765036 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ea05f4b_1373_4e08_9d78_e214b84cdc79.slice/crio-ea590547d8f06db32c041350ba04b757de8614e46fc2442ce4e72e1d3ef9f3b9 WatchSource:0}: Error finding container ea590547d8f06db32c041350ba04b757de8614e46fc2442ce4e72e1d3ef9f3b9: Status 404 returned error can't find the container with id ea590547d8f06db32c041350ba04b757de8614e46fc2442ce4e72e1d3ef9f3b9 Jan 26 23:22:24 crc kubenswrapper[4995]: I0126 23:22:24.270325 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-4g78v" 
event={"ID":"0ea05f4b-1373-4e08-9d78-e214b84cdc79","Type":"ContainerStarted","Data":"9bda28cc3e670ebe44d2aed7bf634fc7a1d21e4239dacc53931b047119321132"} Jan 26 23:22:24 crc kubenswrapper[4995]: I0126 23:22:24.270396 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-4g78v" event={"ID":"0ea05f4b-1373-4e08-9d78-e214b84cdc79","Type":"ContainerStarted","Data":"ea590547d8f06db32c041350ba04b757de8614e46fc2442ce4e72e1d3ef9f3b9"} Jan 26 23:22:24 crc kubenswrapper[4995]: I0126 23:22:24.292854 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-4g78v" podStartSLOduration=1.292833219 podStartE2EDuration="1.292833219s" podCreationTimestamp="2026-01-26 23:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:22:24.28287115 +0000 UTC m=+848.447578615" watchObservedRunningTime="2026-01-26 23:22:24.292833219 +0000 UTC m=+848.457540684" Jan 26 23:22:28 crc kubenswrapper[4995]: I0126 23:22:28.391384 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.690103 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fzjhg"] Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.692304 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fzjhg" Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.698388 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.698507 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-j75mr" Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.698762 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.701706 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fzjhg"] Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.831090 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brjwd\" (UniqueName: \"kubernetes.io/projected/66b955f3-66bb-42fe-b7de-e5a07bbc4bd1-kube-api-access-brjwd\") pod \"openstack-operator-index-fzjhg\" (UID: \"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1\") " pod="openstack-operators/openstack-operator-index-fzjhg" Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.932520 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brjwd\" (UniqueName: \"kubernetes.io/projected/66b955f3-66bb-42fe-b7de-e5a07bbc4bd1-kube-api-access-brjwd\") pod \"openstack-operator-index-fzjhg\" (UID: \"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1\") " pod="openstack-operators/openstack-operator-index-fzjhg" Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.952634 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brjwd\" (UniqueName: \"kubernetes.io/projected/66b955f3-66bb-42fe-b7de-e5a07bbc4bd1-kube-api-access-brjwd\") pod \"openstack-operator-index-fzjhg\" (UID: 
\"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1\") " pod="openstack-operators/openstack-operator-index-fzjhg" Jan 26 23:22:32 crc kubenswrapper[4995]: I0126 23:22:32.067726 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fzjhg" Jan 26 23:22:32 crc kubenswrapper[4995]: I0126 23:22:32.463456 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fzjhg"] Jan 26 23:22:33 crc kubenswrapper[4995]: I0126 23:22:33.342128 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fzjhg" event={"ID":"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1","Type":"ContainerStarted","Data":"aeca02f391a8dd03ed2ab5137c9d680b8ac5c3e8670fba3b9525f1ab0df35c29"} Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.055776 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fzjhg"] Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.354974 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fzjhg" event={"ID":"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1","Type":"ContainerStarted","Data":"39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be"} Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.355112 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-fzjhg" podUID="66b955f3-66bb-42fe-b7de-e5a07bbc4bd1" containerName="registry-server" containerID="cri-o://39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be" gracePeriod=2 Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.369683 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fzjhg" podStartSLOduration=1.963383484 podStartE2EDuration="4.369664461s" podCreationTimestamp="2026-01-26 23:22:31 +0000 UTC" 
firstStartedPulling="2026-01-26 23:22:32.477442359 +0000 UTC m=+856.642149824" lastFinishedPulling="2026-01-26 23:22:34.883723326 +0000 UTC m=+859.048430801" observedRunningTime="2026-01-26 23:22:35.369199099 +0000 UTC m=+859.533906564" watchObservedRunningTime="2026-01-26 23:22:35.369664461 +0000 UTC m=+859.534371926" Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.668413 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-z9fdb"] Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.669428 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-z9fdb" Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.684324 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z9fdb"] Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.757225 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fzjhg" Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.788726 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkx5q\" (UniqueName: \"kubernetes.io/projected/ca183057-4337-4dfb-a5ec-e8945fe74cca-kube-api-access-wkx5q\") pod \"openstack-operator-index-z9fdb\" (UID: \"ca183057-4337-4dfb-a5ec-e8945fe74cca\") " pod="openstack-operators/openstack-operator-index-z9fdb" Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.890213 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brjwd\" (UniqueName: \"kubernetes.io/projected/66b955f3-66bb-42fe-b7de-e5a07bbc4bd1-kube-api-access-brjwd\") pod \"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1\" (UID: \"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1\") " Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.890467 4995 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-wkx5q\" (UniqueName: \"kubernetes.io/projected/ca183057-4337-4dfb-a5ec-e8945fe74cca-kube-api-access-wkx5q\") pod \"openstack-operator-index-z9fdb\" (UID: \"ca183057-4337-4dfb-a5ec-e8945fe74cca\") " pod="openstack-operators/openstack-operator-index-z9fdb" Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.899965 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b955f3-66bb-42fe-b7de-e5a07bbc4bd1-kube-api-access-brjwd" (OuterVolumeSpecName: "kube-api-access-brjwd") pod "66b955f3-66bb-42fe-b7de-e5a07bbc4bd1" (UID: "66b955f3-66bb-42fe-b7de-e5a07bbc4bd1"). InnerVolumeSpecName "kube-api-access-brjwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.915424 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkx5q\" (UniqueName: \"kubernetes.io/projected/ca183057-4337-4dfb-a5ec-e8945fe74cca-kube-api-access-wkx5q\") pod \"openstack-operator-index-z9fdb\" (UID: \"ca183057-4337-4dfb-a5ec-e8945fe74cca\") " pod="openstack-operators/openstack-operator-index-z9fdb" Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.992028 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brjwd\" (UniqueName: \"kubernetes.io/projected/66b955f3-66bb-42fe-b7de-e5a07bbc4bd1-kube-api-access-brjwd\") on node \"crc\" DevicePath \"\"" Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.992244 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-z9fdb" Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.256559 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z9fdb"] Jan 26 23:22:36 crc kubenswrapper[4995]: W0126 23:22:36.268779 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca183057_4337_4dfb_a5ec_e8945fe74cca.slice/crio-c85224bebdeebdf5990b8931fe233be1d8d5cd1955ee5290554d3f6882c27247 WatchSource:0}: Error finding container c85224bebdeebdf5990b8931fe233be1d8d5cd1955ee5290554d3f6882c27247: Status 404 returned error can't find the container with id c85224bebdeebdf5990b8931fe233be1d8d5cd1955ee5290554d3f6882c27247 Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.390288 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z9fdb" event={"ID":"ca183057-4337-4dfb-a5ec-e8945fe74cca","Type":"ContainerStarted","Data":"c85224bebdeebdf5990b8931fe233be1d8d5cd1955ee5290554d3f6882c27247"} Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.395589 4995 generic.go:334] "Generic (PLEG): container finished" podID="66b955f3-66bb-42fe-b7de-e5a07bbc4bd1" containerID="39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be" exitCode=0 Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.395646 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fzjhg" event={"ID":"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1","Type":"ContainerDied","Data":"39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be"} Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.395668 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fzjhg" 
event={"ID":"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1","Type":"ContainerDied","Data":"aeca02f391a8dd03ed2ab5137c9d680b8ac5c3e8670fba3b9525f1ab0df35c29"} Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.395690 4995 scope.go:117] "RemoveContainer" containerID="39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be" Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.395808 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fzjhg" Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.457000 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fzjhg"] Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.463682 4995 scope.go:117] "RemoveContainer" containerID="39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be" Jan 26 23:22:36 crc kubenswrapper[4995]: E0126 23:22:36.464315 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be\": container with ID starting with 39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be not found: ID does not exist" containerID="39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be" Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.464343 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be"} err="failed to get container status \"39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be\": rpc error: code = NotFound desc = could not find container \"39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be\": container with ID starting with 39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be not found: ID does not exist" Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 
23:22:36.468076 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-fzjhg"] Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.525960 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66b955f3-66bb-42fe-b7de-e5a07bbc4bd1" path="/var/lib/kubelet/pods/66b955f3-66bb-42fe-b7de-e5a07bbc4bd1/volumes" Jan 26 23:22:37 crc kubenswrapper[4995]: I0126 23:22:37.404582 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z9fdb" event={"ID":"ca183057-4337-4dfb-a5ec-e8945fe74cca","Type":"ContainerStarted","Data":"a5a2cd6aba9978870b845040b93b52033cbb36f75ecea9df7f7bb74684c28918"} Jan 26 23:22:37 crc kubenswrapper[4995]: I0126 23:22:37.425442 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-z9fdb" podStartSLOduration=2.372414801 podStartE2EDuration="2.425424063s" podCreationTimestamp="2026-01-26 23:22:35 +0000 UTC" firstStartedPulling="2026-01-26 23:22:36.274810235 +0000 UTC m=+860.439517750" lastFinishedPulling="2026-01-26 23:22:36.327819537 +0000 UTC m=+860.492527012" observedRunningTime="2026-01-26 23:22:37.420616123 +0000 UTC m=+861.585323598" watchObservedRunningTime="2026-01-26 23:22:37.425424063 +0000 UTC m=+861.590131528" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.277597 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lhq9l"] Jan 26 23:22:43 crc kubenswrapper[4995]: E0126 23:22:43.278582 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b955f3-66bb-42fe-b7de-e5a07bbc4bd1" containerName="registry-server" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.278617 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b955f3-66bb-42fe-b7de-e5a07bbc4bd1" containerName="registry-server" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.278926 4995 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="66b955f3-66bb-42fe-b7de-e5a07bbc4bd1" containerName="registry-server" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.281053 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.288076 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lhq9l"] Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.399823 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-catalog-content\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.399896 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-utilities\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.399930 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l9ct\" (UniqueName: \"kubernetes.io/projected/3d2e5512-c49b-4082-a5b0-44df42a443ee-kube-api-access-2l9ct\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.501581 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-catalog-content\") pod 
\"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.501646 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-utilities\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.501685 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l9ct\" (UniqueName: \"kubernetes.io/projected/3d2e5512-c49b-4082-a5b0-44df42a443ee-kube-api-access-2l9ct\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.502051 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-catalog-content\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.502164 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-utilities\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.521364 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l9ct\" (UniqueName: \"kubernetes.io/projected/3d2e5512-c49b-4082-a5b0-44df42a443ee-kube-api-access-2l9ct\") pod \"community-operators-lhq9l\" (UID: 
\"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.626873 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.910623 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lhq9l"] Jan 26 23:22:44 crc kubenswrapper[4995]: I0126 23:22:44.458203 4995 generic.go:334] "Generic (PLEG): container finished" podID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerID="16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416" exitCode=0 Jan 26 23:22:44 crc kubenswrapper[4995]: I0126 23:22:44.458340 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhq9l" event={"ID":"3d2e5512-c49b-4082-a5b0-44df42a443ee","Type":"ContainerDied","Data":"16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416"} Jan 26 23:22:44 crc kubenswrapper[4995]: I0126 23:22:44.459371 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhq9l" event={"ID":"3d2e5512-c49b-4082-a5b0-44df42a443ee","Type":"ContainerStarted","Data":"33dffb9a73e1025af441a109e2fb4fb29fb662cdba9a46e660ae5dca52813904"} Jan 26 23:22:45 crc kubenswrapper[4995]: I0126 23:22:45.466539 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhq9l" event={"ID":"3d2e5512-c49b-4082-a5b0-44df42a443ee","Type":"ContainerStarted","Data":"0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869"} Jan 26 23:22:45 crc kubenswrapper[4995]: I0126 23:22:45.993008 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-z9fdb" Jan 26 23:22:45 crc kubenswrapper[4995]: I0126 23:22:45.993451 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack-operators/openstack-operator-index-z9fdb" Jan 26 23:22:46 crc kubenswrapper[4995]: I0126 23:22:46.033025 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-z9fdb" Jan 26 23:22:46 crc kubenswrapper[4995]: I0126 23:22:46.480344 4995 generic.go:334] "Generic (PLEG): container finished" podID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerID="0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869" exitCode=0 Jan 26 23:22:46 crc kubenswrapper[4995]: I0126 23:22:46.481289 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhq9l" event={"ID":"3d2e5512-c49b-4082-a5b0-44df42a443ee","Type":"ContainerDied","Data":"0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869"} Jan 26 23:22:46 crc kubenswrapper[4995]: I0126 23:22:46.534033 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-z9fdb" Jan 26 23:22:47 crc kubenswrapper[4995]: I0126 23:22:47.490561 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhq9l" event={"ID":"3d2e5512-c49b-4082-a5b0-44df42a443ee","Type":"ContainerStarted","Data":"3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1"} Jan 26 23:22:47 crc kubenswrapper[4995]: I0126 23:22:47.515873 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lhq9l" podStartSLOduration=2.085659349 podStartE2EDuration="4.515850853s" podCreationTimestamp="2026-01-26 23:22:43 +0000 UTC" firstStartedPulling="2026-01-26 23:22:44.460318487 +0000 UTC m=+868.625025952" lastFinishedPulling="2026-01-26 23:22:46.890509951 +0000 UTC m=+871.055217456" observedRunningTime="2026-01-26 23:22:47.510888249 +0000 UTC m=+871.675595744" watchObservedRunningTime="2026-01-26 23:22:47.515850853 +0000 UTC m=+871.680558338" Jan 
26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.137130 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"] Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.142020 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.145571 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jtm6l" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.151399 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"] Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.249520 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-bundle\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.250060 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsqmt\" (UniqueName: \"kubernetes.io/projected/1cbffe6c-1d98-4769-8f02-7a966a63ef38-kube-api-access-gsqmt\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.250172 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-util\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.352166 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-bundle\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.352272 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsqmt\" (UniqueName: \"kubernetes.io/projected/1cbffe6c-1d98-4769-8f02-7a966a63ef38-kube-api-access-gsqmt\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.352391 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-util\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.353003 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-bundle\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: 
\"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.353244 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-util\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.383267 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsqmt\" (UniqueName: \"kubernetes.io/projected/1cbffe6c-1d98-4769-8f02-7a966a63ef38-kube-api-access-gsqmt\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.483981 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.627495 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.629329 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.702521 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.740911 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"] Jan 26 23:22:53 crc kubenswrapper[4995]: W0126 23:22:53.753958 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cbffe6c_1d98_4769_8f02_7a966a63ef38.slice/crio-ea5b06b7ac5626f340c0034b3ae34ed6b029b48200466d5c2b5046af454254ab WatchSource:0}: Error finding container ea5b06b7ac5626f340c0034b3ae34ed6b029b48200466d5c2b5046af454254ab: Status 404 returned error can't find the container with id ea5b06b7ac5626f340c0034b3ae34ed6b029b48200466d5c2b5046af454254ab Jan 26 23:22:54 crc kubenswrapper[4995]: I0126 23:22:54.557259 4995 generic.go:334] "Generic (PLEG): container finished" podID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerID="0f8c0e69350ba0b54b859b17faf545edf4216ff04a06e1db2646a7a546558883" exitCode=0 Jan 26 23:22:54 crc kubenswrapper[4995]: I0126 23:22:54.559215 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" 
event={"ID":"1cbffe6c-1d98-4769-8f02-7a966a63ef38","Type":"ContainerDied","Data":"0f8c0e69350ba0b54b859b17faf545edf4216ff04a06e1db2646a7a546558883"} Jan 26 23:22:54 crc kubenswrapper[4995]: I0126 23:22:54.559262 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" event={"ID":"1cbffe6c-1d98-4769-8f02-7a966a63ef38","Type":"ContainerStarted","Data":"ea5b06b7ac5626f340c0034b3ae34ed6b029b48200466d5c2b5046af454254ab"} Jan 26 23:22:54 crc kubenswrapper[4995]: I0126 23:22:54.614496 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:55 crc kubenswrapper[4995]: I0126 23:22:55.568128 4995 generic.go:334] "Generic (PLEG): container finished" podID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerID="775ecca36989c1c8a8c882b9b28e779fd41acf55a8dd843512e2e45dfe1810d8" exitCode=0 Jan 26 23:22:55 crc kubenswrapper[4995]: I0126 23:22:55.568215 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" event={"ID":"1cbffe6c-1d98-4769-8f02-7a966a63ef38","Type":"ContainerDied","Data":"775ecca36989c1c8a8c882b9b28e779fd41acf55a8dd843512e2e45dfe1810d8"} Jan 26 23:22:56 crc kubenswrapper[4995]: I0126 23:22:56.581300 4995 generic.go:334] "Generic (PLEG): container finished" podID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerID="132c6002305c662a0517ffe743dbadb9323be0431c2c4b18edf8f66b79d2edef" exitCode=0 Jan 26 23:22:56 crc kubenswrapper[4995]: I0126 23:22:56.581464 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" event={"ID":"1cbffe6c-1d98-4769-8f02-7a966a63ef38","Type":"ContainerDied","Data":"132c6002305c662a0517ffe743dbadb9323be0431c2c4b18edf8f66b79d2edef"} Jan 26 23:22:56 crc kubenswrapper[4995]: I0126 23:22:56.660131 4995 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lhq9l"] Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.586997 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lhq9l" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerName="registry-server" containerID="cri-o://3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1" gracePeriod=2 Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.863681 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.941192 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-bundle\") pod \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.942179 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-bundle" (OuterVolumeSpecName: "bundle") pod "1cbffe6c-1d98-4769-8f02-7a966a63ef38" (UID: "1cbffe6c-1d98-4769-8f02-7a966a63ef38"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.942236 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-util\") pod \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.942333 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsqmt\" (UniqueName: \"kubernetes.io/projected/1cbffe6c-1d98-4769-8f02-7a966a63ef38-kube-api-access-gsqmt\") pod \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.951077 4995 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.951574 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cbffe6c-1d98-4769-8f02-7a966a63ef38-kube-api-access-gsqmt" (OuterVolumeSpecName: "kube-api-access-gsqmt") pod "1cbffe6c-1d98-4769-8f02-7a966a63ef38" (UID: "1cbffe6c-1d98-4769-8f02-7a966a63ef38"). InnerVolumeSpecName "kube-api-access-gsqmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.959471 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-util" (OuterVolumeSpecName: "util") pod "1cbffe6c-1d98-4769-8f02-7a966a63ef38" (UID: "1cbffe6c-1d98-4769-8f02-7a966a63ef38"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.995397 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.052308 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-catalog-content\") pod \"3d2e5512-c49b-4082-a5b0-44df42a443ee\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.052687 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l9ct\" (UniqueName: \"kubernetes.io/projected/3d2e5512-c49b-4082-a5b0-44df42a443ee-kube-api-access-2l9ct\") pod \"3d2e5512-c49b-4082-a5b0-44df42a443ee\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.052983 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-utilities\") pod \"3d2e5512-c49b-4082-a5b0-44df42a443ee\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.053527 4995 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-util\") on node \"crc\" DevicePath \"\"" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.054093 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsqmt\" (UniqueName: \"kubernetes.io/projected/1cbffe6c-1d98-4769-8f02-7a966a63ef38-kube-api-access-gsqmt\") on node \"crc\" DevicePath \"\"" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.058358 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3d2e5512-c49b-4082-a5b0-44df42a443ee-kube-api-access-2l9ct" (OuterVolumeSpecName: "kube-api-access-2l9ct") pod "3d2e5512-c49b-4082-a5b0-44df42a443ee" (UID: "3d2e5512-c49b-4082-a5b0-44df42a443ee"). InnerVolumeSpecName "kube-api-access-2l9ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.059054 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-utilities" (OuterVolumeSpecName: "utilities") pod "3d2e5512-c49b-4082-a5b0-44df42a443ee" (UID: "3d2e5512-c49b-4082-a5b0-44df42a443ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.125843 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d2e5512-c49b-4082-a5b0-44df42a443ee" (UID: "3d2e5512-c49b-4082-a5b0-44df42a443ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.158968 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.159005 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.159015 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l9ct\" (UniqueName: \"kubernetes.io/projected/3d2e5512-c49b-4082-a5b0-44df42a443ee-kube-api-access-2l9ct\") on node \"crc\" DevicePath \"\"" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.597587 4995 generic.go:334] "Generic (PLEG): container finished" podID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerID="3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1" exitCode=0 Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.597670 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhq9l" event={"ID":"3d2e5512-c49b-4082-a5b0-44df42a443ee","Type":"ContainerDied","Data":"3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1"} Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.597709 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhq9l" event={"ID":"3d2e5512-c49b-4082-a5b0-44df42a443ee","Type":"ContainerDied","Data":"33dffb9a73e1025af441a109e2fb4fb29fb662cdba9a46e660ae5dca52813904"} Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.597732 4995 scope.go:117] "RemoveContainer" containerID="3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 
23:22:58.597742 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhq9l" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.604421 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" event={"ID":"1cbffe6c-1d98-4769-8f02-7a966a63ef38","Type":"ContainerDied","Data":"ea5b06b7ac5626f340c0034b3ae34ed6b029b48200466d5c2b5046af454254ab"} Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.604476 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea5b06b7ac5626f340c0034b3ae34ed6b029b48200466d5c2b5046af454254ab" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.604566 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.631352 4995 scope.go:117] "RemoveContainer" containerID="0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.631475 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lhq9l"] Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.634897 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lhq9l"] Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.652520 4995 scope.go:117] "RemoveContainer" containerID="16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.677493 4995 scope.go:117] "RemoveContainer" containerID="3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1" Jan 26 23:22:58 crc kubenswrapper[4995]: E0126 23:22:58.678133 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1\": container with ID starting with 3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1 not found: ID does not exist" containerID="3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.678346 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1"} err="failed to get container status \"3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1\": rpc error: code = NotFound desc = could not find container \"3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1\": container with ID starting with 3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1 not found: ID does not exist" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.678497 4995 scope.go:117] "RemoveContainer" containerID="0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869" Jan 26 23:22:58 crc kubenswrapper[4995]: E0126 23:22:58.679151 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869\": container with ID starting with 0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869 not found: ID does not exist" containerID="0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.679238 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869"} err="failed to get container status \"0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869\": rpc error: code = NotFound desc = could not find container 
\"0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869\": container with ID starting with 0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869 not found: ID does not exist" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.679305 4995 scope.go:117] "RemoveContainer" containerID="16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416" Jan 26 23:22:58 crc kubenswrapper[4995]: E0126 23:22:58.679584 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416\": container with ID starting with 16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416 not found: ID does not exist" containerID="16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416" Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.679732 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416"} err="failed to get container status \"16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416\": rpc error: code = NotFound desc = could not find container \"16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416\": container with ID starting with 16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416 not found: ID does not exist" Jan 26 23:23:00 crc kubenswrapper[4995]: I0126 23:23:00.533026 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" path="/var/lib/kubelet/pods/3d2e5512-c49b-4082-a5b0-44df42a443ee/volumes" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.079389 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp"] Jan 26 23:23:05 crc kubenswrapper[4995]: E0126 23:23:05.079877 4995 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerName="pull" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.079888 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerName="pull" Jan 26 23:23:05 crc kubenswrapper[4995]: E0126 23:23:05.079899 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerName="util" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.079905 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerName="util" Jan 26 23:23:05 crc kubenswrapper[4995]: E0126 23:23:05.079915 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerName="extract-content" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.079921 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerName="extract-content" Jan 26 23:23:05 crc kubenswrapper[4995]: E0126 23:23:05.079934 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerName="extract-utilities" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.079940 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerName="extract-utilities" Jan 26 23:23:05 crc kubenswrapper[4995]: E0126 23:23:05.079949 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerName="extract" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.079954 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerName="extract" Jan 26 23:23:05 crc kubenswrapper[4995]: E0126 23:23:05.079965 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" 
containerName="registry-server" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.079971 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerName="registry-server" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.080070 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerName="extract" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.080081 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerName="registry-server" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.080504 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.082637 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-96btn" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.111885 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp"] Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.155331 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4k22\" (UniqueName: \"kubernetes.io/projected/892f33f6-3409-407d-b85b-922b8bdbfa16-kube-api-access-f4k22\") pod \"openstack-operator-controller-init-f8d7d87cb-d4ktp\" (UID: \"892f33f6-3409-407d-b85b-922b8bdbfa16\") " pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.257064 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4k22\" (UniqueName: \"kubernetes.io/projected/892f33f6-3409-407d-b85b-922b8bdbfa16-kube-api-access-f4k22\") pod 
\"openstack-operator-controller-init-f8d7d87cb-d4ktp\" (UID: \"892f33f6-3409-407d-b85b-922b8bdbfa16\") " pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.279357 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4k22\" (UniqueName: \"kubernetes.io/projected/892f33f6-3409-407d-b85b-922b8bdbfa16-kube-api-access-f4k22\") pod \"openstack-operator-controller-init-f8d7d87cb-d4ktp\" (UID: \"892f33f6-3409-407d-b85b-922b8bdbfa16\") " pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.403713 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.837531 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp"] Jan 26 23:23:05 crc kubenswrapper[4995]: W0126 23:23:05.851888 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod892f33f6_3409_407d_b85b_922b8bdbfa16.slice/crio-c39384df979e6337b8f9a32ef86a0cb2526573842d84866ed04f1ff9dcd951b0 WatchSource:0}: Error finding container c39384df979e6337b8f9a32ef86a0cb2526573842d84866ed04f1ff9dcd951b0: Status 404 returned error can't find the container with id c39384df979e6337b8f9a32ef86a0cb2526573842d84866ed04f1ff9dcd951b0 Jan 26 23:23:06 crc kubenswrapper[4995]: I0126 23:23:06.665574 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" event={"ID":"892f33f6-3409-407d-b85b-922b8bdbfa16","Type":"ContainerStarted","Data":"c39384df979e6337b8f9a32ef86a0cb2526573842d84866ed04f1ff9dcd951b0"} Jan 26 23:23:10 crc kubenswrapper[4995]: I0126 23:23:10.694193 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" event={"ID":"892f33f6-3409-407d-b85b-922b8bdbfa16","Type":"ContainerStarted","Data":"6a5755d8b4f8e8fbc12a9584a063252b6234f0b1c979feb6127b8e6060aa5114"} Jan 26 23:23:10 crc kubenswrapper[4995]: I0126 23:23:10.695047 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:23:10 crc kubenswrapper[4995]: I0126 23:23:10.744946 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" podStartSLOduration=1.76987821 podStartE2EDuration="5.744918799s" podCreationTimestamp="2026-01-26 23:23:05 +0000 UTC" firstStartedPulling="2026-01-26 23:23:05.854974823 +0000 UTC m=+890.019682328" lastFinishedPulling="2026-01-26 23:23:09.830015442 +0000 UTC m=+893.994722917" observedRunningTime="2026-01-26 23:23:10.733414792 +0000 UTC m=+894.898122297" watchObservedRunningTime="2026-01-26 23:23:10.744918799 +0000 UTC m=+894.909626294" Jan 26 23:23:15 crc kubenswrapper[4995]: I0126 23:23:15.408146 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:23:16 crc kubenswrapper[4995]: I0126 23:23:16.866842 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qpbtn"] Jan 26 23:23:16 crc kubenswrapper[4995]: I0126 23:23:16.868387 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:16 crc kubenswrapper[4995]: I0126 23:23:16.886139 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qpbtn"] Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.034041 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pknq7\" (UniqueName: \"kubernetes.io/projected/44311f3a-63ea-444c-bda7-470d8c27fbcb-kube-api-access-pknq7\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.034135 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-utilities\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.034372 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-catalog-content\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.135948 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-catalog-content\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.136022 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pknq7\" (UniqueName: \"kubernetes.io/projected/44311f3a-63ea-444c-bda7-470d8c27fbcb-kube-api-access-pknq7\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.136057 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-utilities\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.136482 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-catalog-content\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.136523 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-utilities\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.155508 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pknq7\" (UniqueName: \"kubernetes.io/projected/44311f3a-63ea-444c-bda7-470d8c27fbcb-kube-api-access-pknq7\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.184070 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.403500 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qpbtn"] Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.759445 4995 generic.go:334] "Generic (PLEG): container finished" podID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerID="c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a" exitCode=0 Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.759539 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qpbtn" event={"ID":"44311f3a-63ea-444c-bda7-470d8c27fbcb","Type":"ContainerDied","Data":"c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a"} Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.759733 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qpbtn" event={"ID":"44311f3a-63ea-444c-bda7-470d8c27fbcb","Type":"ContainerStarted","Data":"68f5c163dd5a5f769fd0700b2fbf0f86c445afc1ada1ec7b333f08125ea3f657"} Jan 26 23:23:18 crc kubenswrapper[4995]: I0126 23:23:18.780476 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qpbtn" event={"ID":"44311f3a-63ea-444c-bda7-470d8c27fbcb","Type":"ContainerStarted","Data":"439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d"} Jan 26 23:23:19 crc kubenswrapper[4995]: I0126 23:23:19.788823 4995 generic.go:334] "Generic (PLEG): container finished" podID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerID="439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d" exitCode=0 Jan 26 23:23:19 crc kubenswrapper[4995]: I0126 23:23:19.788907 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qpbtn" 
event={"ID":"44311f3a-63ea-444c-bda7-470d8c27fbcb","Type":"ContainerDied","Data":"439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d"} Jan 26 23:23:21 crc kubenswrapper[4995]: I0126 23:23:21.805234 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qpbtn" event={"ID":"44311f3a-63ea-444c-bda7-470d8c27fbcb","Type":"ContainerStarted","Data":"76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba"} Jan 26 23:23:21 crc kubenswrapper[4995]: I0126 23:23:21.824763 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qpbtn" podStartSLOduration=2.924478223 podStartE2EDuration="5.824748646s" podCreationTimestamp="2026-01-26 23:23:16 +0000 UTC" firstStartedPulling="2026-01-26 23:23:17.761340982 +0000 UTC m=+901.926048457" lastFinishedPulling="2026-01-26 23:23:20.661611375 +0000 UTC m=+904.826318880" observedRunningTime="2026-01-26 23:23:21.820970452 +0000 UTC m=+905.985677917" watchObservedRunningTime="2026-01-26 23:23:21.824748646 +0000 UTC m=+905.989456111" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.190117 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7j9zc"] Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.191561 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.227670 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7j9zc"] Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.315640 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-catalog-content\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.315843 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvclp\" (UniqueName: \"kubernetes.io/projected/fb8f3318-4432-4877-9e0c-1ae39d3a849e-kube-api-access-nvclp\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.315991 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-utilities\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.417505 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-catalog-content\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.417592 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nvclp\" (UniqueName: \"kubernetes.io/projected/fb8f3318-4432-4877-9e0c-1ae39d3a849e-kube-api-access-nvclp\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.417634 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-utilities\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.418202 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-catalog-content\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.418254 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-utilities\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.459043 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvclp\" (UniqueName: \"kubernetes.io/projected/fb8f3318-4432-4877-9e0c-1ae39d3a849e-kube-api-access-nvclp\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.550241 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.941847 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7j9zc"] Jan 26 23:23:24 crc kubenswrapper[4995]: I0126 23:23:24.830854 4995 generic.go:334] "Generic (PLEG): container finished" podID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerID="34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b" exitCode=0 Jan 26 23:23:24 crc kubenswrapper[4995]: I0126 23:23:24.830944 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7j9zc" event={"ID":"fb8f3318-4432-4877-9e0c-1ae39d3a849e","Type":"ContainerDied","Data":"34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b"} Jan 26 23:23:24 crc kubenswrapper[4995]: I0126 23:23:24.831197 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7j9zc" event={"ID":"fb8f3318-4432-4877-9e0c-1ae39d3a849e","Type":"ContainerStarted","Data":"04da271e4f7505c2ffd196e4561fda52d5add96fbb0f643f634bb2bd36cc7757"} Jan 26 23:23:25 crc kubenswrapper[4995]: I0126 23:23:25.844529 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7j9zc" event={"ID":"fb8f3318-4432-4877-9e0c-1ae39d3a849e","Type":"ContainerStarted","Data":"b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492"} Jan 26 23:23:26 crc kubenswrapper[4995]: I0126 23:23:26.852960 4995 generic.go:334] "Generic (PLEG): container finished" podID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerID="b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492" exitCode=0 Jan 26 23:23:26 crc kubenswrapper[4995]: I0126 23:23:26.852995 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7j9zc" 
event={"ID":"fb8f3318-4432-4877-9e0c-1ae39d3a849e","Type":"ContainerDied","Data":"b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492"} Jan 26 23:23:27 crc kubenswrapper[4995]: I0126 23:23:27.191340 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:27 crc kubenswrapper[4995]: I0126 23:23:27.191491 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:27 crc kubenswrapper[4995]: I0126 23:23:27.249228 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:27 crc kubenswrapper[4995]: I0126 23:23:27.860806 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7j9zc" event={"ID":"fb8f3318-4432-4877-9e0c-1ae39d3a849e","Type":"ContainerStarted","Data":"acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca"} Jan 26 23:23:27 crc kubenswrapper[4995]: I0126 23:23:27.902934 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7j9zc" podStartSLOduration=2.479602888 podStartE2EDuration="4.902915048s" podCreationTimestamp="2026-01-26 23:23:23 +0000 UTC" firstStartedPulling="2026-01-26 23:23:24.832627284 +0000 UTC m=+908.997334759" lastFinishedPulling="2026-01-26 23:23:27.255939454 +0000 UTC m=+911.420646919" observedRunningTime="2026-01-26 23:23:27.898151999 +0000 UTC m=+912.062859464" watchObservedRunningTime="2026-01-26 23:23:27.902915048 +0000 UTC m=+912.067622513" Jan 26 23:23:27 crc kubenswrapper[4995]: I0126 23:23:27.963148 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:29 crc kubenswrapper[4995]: I0126 23:23:29.580431 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-qpbtn"] Jan 26 23:23:30 crc kubenswrapper[4995]: I0126 23:23:30.883303 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qpbtn" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="registry-server" containerID="cri-o://76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba" gracePeriod=2 Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.795528 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.892241 4995 generic.go:334] "Generic (PLEG): container finished" podID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerID="76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba" exitCode=0 Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.892315 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qpbtn" event={"ID":"44311f3a-63ea-444c-bda7-470d8c27fbcb","Type":"ContainerDied","Data":"76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba"} Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.892344 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.892404 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qpbtn" event={"ID":"44311f3a-63ea-444c-bda7-470d8c27fbcb","Type":"ContainerDied","Data":"68f5c163dd5a5f769fd0700b2fbf0f86c445afc1ada1ec7b333f08125ea3f657"} Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.892436 4995 scope.go:117] "RemoveContainer" containerID="76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.913530 4995 scope.go:117] "RemoveContainer" containerID="439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.935228 4995 scope.go:117] "RemoveContainer" containerID="c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.938355 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pknq7\" (UniqueName: \"kubernetes.io/projected/44311f3a-63ea-444c-bda7-470d8c27fbcb-kube-api-access-pknq7\") pod \"44311f3a-63ea-444c-bda7-470d8c27fbcb\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.938719 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-utilities\") pod \"44311f3a-63ea-444c-bda7-470d8c27fbcb\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.938825 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-catalog-content\") pod \"44311f3a-63ea-444c-bda7-470d8c27fbcb\" (UID: 
\"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.941065 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-utilities" (OuterVolumeSpecName: "utilities") pod "44311f3a-63ea-444c-bda7-470d8c27fbcb" (UID: "44311f3a-63ea-444c-bda7-470d8c27fbcb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.959293 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44311f3a-63ea-444c-bda7-470d8c27fbcb-kube-api-access-pknq7" (OuterVolumeSpecName: "kube-api-access-pknq7") pod "44311f3a-63ea-444c-bda7-470d8c27fbcb" (UID: "44311f3a-63ea-444c-bda7-470d8c27fbcb"). InnerVolumeSpecName "kube-api-access-pknq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.990276 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44311f3a-63ea-444c-bda7-470d8c27fbcb" (UID: "44311f3a-63ea-444c-bda7-470d8c27fbcb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.996915 4995 scope.go:117] "RemoveContainer" containerID="76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba" Jan 26 23:23:31 crc kubenswrapper[4995]: E0126 23:23:31.998940 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba\": container with ID starting with 76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba not found: ID does not exist" containerID="76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.998982 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba"} err="failed to get container status \"76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba\": rpc error: code = NotFound desc = could not find container \"76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba\": container with ID starting with 76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba not found: ID does not exist" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.999012 4995 scope.go:117] "RemoveContainer" containerID="439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d" Jan 26 23:23:31 crc kubenswrapper[4995]: E0126 23:23:31.999321 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d\": container with ID starting with 439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d not found: ID does not exist" containerID="439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.999373 
4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d"} err="failed to get container status \"439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d\": rpc error: code = NotFound desc = could not find container \"439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d\": container with ID starting with 439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d not found: ID does not exist" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.999406 4995 scope.go:117] "RemoveContainer" containerID="c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a" Jan 26 23:23:31 crc kubenswrapper[4995]: E0126 23:23:31.999737 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a\": container with ID starting with c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a not found: ID does not exist" containerID="c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.999779 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a"} err="failed to get container status \"c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a\": rpc error: code = NotFound desc = could not find container \"c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a\": container with ID starting with c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a not found: ID does not exist" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.042172 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-utilities\") on node 
\"crc\" DevicePath \"\""
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.042210 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.042225 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pknq7\" (UniqueName: \"kubernetes.io/projected/44311f3a-63ea-444c-bda7-470d8c27fbcb-kube-api-access-pknq7\") on node \"crc\" DevicePath \"\""
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.223426 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qpbtn"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.231650 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qpbtn"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.529995 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" path="/var/lib/kubelet/pods/44311f3a-63ea-444c-bda7-470d8c27fbcb/volumes"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.680973 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8"]
Jan 26 23:23:32 crc kubenswrapper[4995]: E0126 23:23:32.681215 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="extract-content"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.681226 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="extract-content"
Jan 26 23:23:32 crc kubenswrapper[4995]: E0126 23:23:32.681240 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="registry-server"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.681246 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="registry-server"
Jan 26 23:23:32 crc kubenswrapper[4995]: E0126 23:23:32.681262 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="extract-utilities"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.681269 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="extract-utilities"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.681368 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="registry-server"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.681766 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.683930 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-vgt2l"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.697510 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.698558 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.700186 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rf92p"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.717565 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.718971 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.723011 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tbnjf"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.726778 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.742418 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.743619 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.745395 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-j8vtt"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.765417 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.783645 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.796052 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.798991 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.803237 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-6xqrc"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.808593 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.836784 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.854964 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxmq6\" (UniqueName: \"kubernetes.io/projected/4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6-kube-api-access-xxmq6\") pod \"glance-operator-controller-manager-67dd55ff59-gdvdp\" (UID: \"4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.855129 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whrdd\" (UniqueName: \"kubernetes.io/projected/90ae2b4f-43e9-4a37-abc5-d90e958e540b-kube-api-access-whrdd\") pod \"designate-operator-controller-manager-77554cdc5c-kgv2f\" (UID: \"90ae2b4f-43e9-4a37-abc5-d90e958e540b\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.855209 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnm6s\" (UniqueName: \"kubernetes.io/projected/c5dd6b1a-1515-4ad6-b89e-0c7253a71281-kube-api-access-gnm6s\") pod \"barbican-operator-controller-manager-6987f66698-x2fg8\" (UID: \"c5dd6b1a-1515-4ad6-b89e-0c7253a71281\") " pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.855317 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zthkc\" (UniqueName: \"kubernetes.io/projected/70dc0d96-2ba1-487e-8ffc-a98725e002c4-kube-api-access-zthkc\") pod \"cinder-operator-controller-manager-655bf9cfbb-pzzq9\" (UID: \"70dc0d96-2ba1-487e-8ffc-a98725e002c4\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.860529 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.862627 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.876958 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-wjvbd"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.885586 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.910262 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.911372 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.914690 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-b6wmk"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.914839 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.949565 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.956356 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnm6s\" (UniqueName: \"kubernetes.io/projected/c5dd6b1a-1515-4ad6-b89e-0c7253a71281-kube-api-access-gnm6s\") pod \"barbican-operator-controller-manager-6987f66698-x2fg8\" (UID: \"c5dd6b1a-1515-4ad6-b89e-0c7253a71281\") " pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.956431 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zthkc\" (UniqueName: \"kubernetes.io/projected/70dc0d96-2ba1-487e-8ffc-a98725e002c4-kube-api-access-zthkc\") pod \"cinder-operator-controller-manager-655bf9cfbb-pzzq9\" (UID: \"70dc0d96-2ba1-487e-8ffc-a98725e002c4\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.956466 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9t7n\" (UniqueName: \"kubernetes.io/projected/e29f1042-97e4-430c-a262-53ab3cca40d9-kube-api-access-c9t7n\") pod \"heat-operator-controller-manager-954b94f75-7q5kj\" (UID: \"e29f1042-97e4-430c-a262-53ab3cca40d9\") " pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.956511 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkpn2\" (UniqueName: \"kubernetes.io/projected/bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e-kube-api-access-tkpn2\") pod \"horizon-operator-controller-manager-77d5c5b54f-r7mgm\" (UID: \"bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.956538 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxmq6\" (UniqueName: \"kubernetes.io/projected/4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6-kube-api-access-xxmq6\") pod \"glance-operator-controller-manager-67dd55ff59-gdvdp\" (UID: \"4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.956561 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whrdd\" (UniqueName: \"kubernetes.io/projected/90ae2b4f-43e9-4a37-abc5-d90e958e540b-kube-api-access-whrdd\") pod \"designate-operator-controller-manager-77554cdc5c-kgv2f\" (UID: \"90ae2b4f-43e9-4a37-abc5-d90e958e540b\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.974163 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9"]
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.975179 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.983052 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whrdd\" (UniqueName: \"kubernetes.io/projected/90ae2b4f-43e9-4a37-abc5-d90e958e540b-kube-api-access-whrdd\") pod \"designate-operator-controller-manager-77554cdc5c-kgv2f\" (UID: \"90ae2b4f-43e9-4a37-abc5-d90e958e540b\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f"
Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.990921 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-z4krd"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.000619 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnm6s\" (UniqueName: \"kubernetes.io/projected/c5dd6b1a-1515-4ad6-b89e-0c7253a71281-kube-api-access-gnm6s\") pod \"barbican-operator-controller-manager-6987f66698-x2fg8\" (UID: \"c5dd6b1a-1515-4ad6-b89e-0c7253a71281\") " pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.000632 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zthkc\" (UniqueName: \"kubernetes.io/projected/70dc0d96-2ba1-487e-8ffc-a98725e002c4-kube-api-access-zthkc\") pod \"cinder-operator-controller-manager-655bf9cfbb-pzzq9\" (UID: \"70dc0d96-2ba1-487e-8ffc-a98725e002c4\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.000633 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxmq6\" (UniqueName: \"kubernetes.io/projected/4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6-kube-api-access-xxmq6\") pod \"glance-operator-controller-manager-67dd55ff59-gdvdp\" (UID: \"4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.013991 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.017667 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.033288 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.034715 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.036864 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.038906 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zzdgl"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.052598 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.053660 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.056281 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-sqc4z"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.058205 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9t7n\" (UniqueName: \"kubernetes.io/projected/e29f1042-97e4-430c-a262-53ab3cca40d9-kube-api-access-c9t7n\") pod \"heat-operator-controller-manager-954b94f75-7q5kj\" (UID: \"e29f1042-97e4-430c-a262-53ab3cca40d9\") " pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.058248 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgvms\" (UniqueName: \"kubernetes.io/projected/3a2f8d86-155b-476b-86c4-fda3eb595fc9-kube-api-access-wgvms\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.058294 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkpn2\" (UniqueName: \"kubernetes.io/projected/bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e-kube-api-access-tkpn2\") pod \"horizon-operator-controller-manager-77d5c5b54f-r7mgm\" (UID: \"bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.058334 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.058373 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6crnf\" (UniqueName: \"kubernetes.io/projected/555394ee-9ad5-417f-9698-646ba1ddc5f2-kube-api-access-6crnf\") pod \"ironic-operator-controller-manager-768b776ffb-6gtf9\" (UID: \"555394ee-9ad5-417f-9698-646ba1ddc5f2\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.068312 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.074572 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.075170 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9t7n\" (UniqueName: \"kubernetes.io/projected/e29f1042-97e4-430c-a262-53ab3cca40d9-kube-api-access-c9t7n\") pod \"heat-operator-controller-manager-954b94f75-7q5kj\" (UID: \"e29f1042-97e4-430c-a262-53ab3cca40d9\") " pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.083244 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.084560 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkpn2\" (UniqueName: \"kubernetes.io/projected/bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e-kube-api-access-tkpn2\") pod \"horizon-operator-controller-manager-77d5c5b54f-r7mgm\" (UID: \"bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.091372 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.092404 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.098624 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.099201 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-8gqwc"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.108222 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.110372 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4krhf"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.129442 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.138722 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.148125 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.155306 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.156498 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.158645 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cbvjx"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.159436 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6crnf\" (UniqueName: \"kubernetes.io/projected/555394ee-9ad5-417f-9698-646ba1ddc5f2-kube-api-access-6crnf\") pod \"ironic-operator-controller-manager-768b776ffb-6gtf9\" (UID: \"555394ee-9ad5-417f-9698-646ba1ddc5f2\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.159483 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcjzd\" (UniqueName: \"kubernetes.io/projected/0d39c5fc-e526-46e8-8773-6bf87e938b06-kube-api-access-jcjzd\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh\" (UID: \"0d39c5fc-e526-46e8-8773-6bf87e938b06\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.159527 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgvms\" (UniqueName: \"kubernetes.io/projected/3a2f8d86-155b-476b-86c4-fda3eb595fc9-kube-api-access-wgvms\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.159563 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kklj\" (UniqueName: \"kubernetes.io/projected/fd2183e6-a9e4-44b8-861f-9a545aac1c12-kube-api-access-9kklj\") pod \"manila-operator-controller-manager-849fcfbb6b-w2gfg\" (UID: \"fd2183e6-a9e4-44b8-861f-9a545aac1c12\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.159607 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcldk\" (UniqueName: \"kubernetes.io/projected/235cf5b2-2094-4345-bf37-edbcb2e5e48f-kube-api-access-fcldk\") pod \"keystone-operator-controller-manager-55f684fd56-gzjxj\" (UID: \"235cf5b2-2094-4345-bf37-edbcb2e5e48f\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.159648 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8"
Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.159827 4995 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.159870 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert podName:3a2f8d86-155b-476b-86c4-fda3eb595fc9 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:33.659853205 +0000 UTC m=+917.824560670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert") pod "infra-operator-controller-manager-7d75bc88d5-n9dc8" (UID: "3a2f8d86-155b-476b-86c4-fda3eb595fc9") : secret "infra-operator-webhook-server-cert" not found
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.163236 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.170429 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.171561 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.173800 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-9l5z9"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.176441 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.181882 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6crnf\" (UniqueName: \"kubernetes.io/projected/555394ee-9ad5-417f-9698-646ba1ddc5f2-kube-api-access-6crnf\") pod \"ironic-operator-controller-manager-768b776ffb-6gtf9\" (UID: \"555394ee-9ad5-417f-9698-646ba1ddc5f2\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.187646 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgvms\" (UniqueName: \"kubernetes.io/projected/3a2f8d86-155b-476b-86c4-fda3eb595fc9-kube-api-access-wgvms\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.189077 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.190716 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.192355 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-lwlfd"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.196731 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.197572 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.205878 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.206082 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-f9r8r"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.207545 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.214599 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.215938 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.216011 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.218725 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-j5sdv"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.224204 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.237187 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.255607 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.257973 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.260820 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9qbnq"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.261777 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j98vr\" (UniqueName: \"kubernetes.io/projected/4e9b965f-6060-43e7-aa1c-b73472075bae-kube-api-access-j98vr\") pod \"nova-operator-controller-manager-7f54b7d6d4-cf7gh\" (UID: \"4e9b965f-6060-43e7-aa1c-b73472075bae\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.261810 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nff6t\" (UniqueName: \"kubernetes.io/projected/cfbd9d32-25ae-4369-8e16-ce174c0802dc-kube-api-access-nff6t\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.261887 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcjzd\" (UniqueName: \"kubernetes.io/projected/0d39c5fc-e526-46e8-8773-6bf87e938b06-kube-api-access-jcjzd\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh\" (UID: \"0d39c5fc-e526-46e8-8773-6bf87e938b06\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.261918 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mrhd\" (UniqueName: \"kubernetes.io/projected/1b364747-4f4c-4431-becf-0f2b30bc9d20-kube-api-access-5mrhd\") pod \"ovn-operator-controller-manager-6f75f45d54-z899w\" (UID: \"1b364747-4f4c-4431-becf-0f2b30bc9d20\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.261943 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fncvh\" (UniqueName: \"kubernetes.io/projected/ce22ba19-581c-4f75-9bd6-4de0538779a2-kube-api-access-fncvh\") pod \"octavia-operator-controller-manager-756f86fc74-7s666\" (UID: \"ce22ba19-581c-4f75-9bd6-4de0538779a2\") " pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.261969 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm6cv\" (UniqueName: \"kubernetes.io/projected/03047106-c820-43c2-bee1-c8b1fb3a0a0c-kube-api-access-xm6cv\") pod \"neutron-operator-controller-manager-7ffd8d76d4-p47jp\" (UID: \"03047106-c820-43c2-bee1-c8b1fb3a0a0c\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.262010 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kklj\" (UniqueName: \"kubernetes.io/projected/fd2183e6-a9e4-44b8-861f-9a545aac1c12-kube-api-access-9kklj\") pod \"manila-operator-controller-manager-849fcfbb6b-w2gfg\" (UID: \"fd2183e6-a9e4-44b8-861f-9a545aac1c12\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.262042 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcldk\" (UniqueName: \"kubernetes.io/projected/235cf5b2-2094-4345-bf37-edbcb2e5e48f-kube-api-access-fcldk\") pod \"keystone-operator-controller-manager-55f684fd56-gzjxj\" (UID: \"235cf5b2-2094-4345-bf37-edbcb2e5e48f\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.262072 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.270575 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb"]
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.300873 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.316283 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kklj\" (UniqueName: \"kubernetes.io/projected/fd2183e6-a9e4-44b8-861f-9a545aac1c12-kube-api-access-9kklj\") pod \"manila-operator-controller-manager-849fcfbb6b-w2gfg\" (UID: \"fd2183e6-a9e4-44b8-861f-9a545aac1c12\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.321427 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcldk\" (UniqueName: \"kubernetes.io/projected/235cf5b2-2094-4345-bf37-edbcb2e5e48f-kube-api-access-fcldk\") pod \"keystone-operator-controller-manager-55f684fd56-gzjxj\" (UID: \"235cf5b2-2094-4345-bf37-edbcb2e5e48f\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.329386 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcjzd\" (UniqueName: \"kubernetes.io/projected/0d39c5fc-e526-46e8-8773-6bf87e938b06-kube-api-access-jcjzd\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh\" (UID: \"0d39c5fc-e526-46e8-8773-6bf87e938b06\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.363766 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9wr6\" (UniqueName: \"kubernetes.io/projected/aba99191-8a3a-47dc-8dca-136de682a567-kube-api-access-k9wr6\") pod \"swift-operator-controller-manager-547cbdb99f-b4kzb\" (UID: \"aba99191-8a3a-47dc-8dca-136de682a567\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.363815 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.363847 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjqh5\" (UniqueName: \"kubernetes.io/projected/931ac40b-6695-41c7-9d8f-c8eefca6e587-kube-api-access-rjqh5\") pod \"placement-operator-controller-manager-79d5ccc684-5zhml\" (UID: \"931ac40b-6695-41c7-9d8f-c8eefca6e587\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml"
Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.363968 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"kube-api-access-j98vr\" (UniqueName: \"kubernetes.io/projected/4e9b965f-6060-43e7-aa1c-b73472075bae-kube-api-access-j98vr\") pod \"nova-operator-controller-manager-7f54b7d6d4-cf7gh\" (UID: \"4e9b965f-6060-43e7-aa1c-b73472075bae\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.363992 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nff6t\" (UniqueName: \"kubernetes.io/projected/cfbd9d32-25ae-4369-8e16-ce174c0802dc-kube-api-access-nff6t\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.364031 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mrhd\" (UniqueName: \"kubernetes.io/projected/1b364747-4f4c-4431-becf-0f2b30bc9d20-kube-api-access-5mrhd\") pod \"ovn-operator-controller-manager-6f75f45d54-z899w\" (UID: \"1b364747-4f4c-4431-becf-0f2b30bc9d20\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.364053 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fncvh\" (UniqueName: \"kubernetes.io/projected/ce22ba19-581c-4f75-9bd6-4de0538779a2-kube-api-access-fncvh\") pod \"octavia-operator-controller-manager-756f86fc74-7s666\" (UID: \"ce22ba19-581c-4f75-9bd6-4de0538779a2\") " pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.364076 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm6cv\" (UniqueName: \"kubernetes.io/projected/03047106-c820-43c2-bee1-c8b1fb3a0a0c-kube-api-access-xm6cv\") pod 
\"neutron-operator-controller-manager-7ffd8d76d4-p47jp\" (UID: \"03047106-c820-43c2-bee1-c8b1fb3a0a0c\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.364229 4995 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.364283 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert podName:cfbd9d32-25ae-4369-8e16-ce174c0802dc nodeName:}" failed. No retries permitted until 2026-01-26 23:23:33.864265299 +0000 UTC m=+918.028972764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" (UID: "cfbd9d32-25ae-4369-8e16-ce174c0802dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.394607 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fncvh\" (UniqueName: \"kubernetes.io/projected/ce22ba19-581c-4f75-9bd6-4de0538779a2-kube-api-access-fncvh\") pod \"octavia-operator-controller-manager-756f86fc74-7s666\" (UID: \"ce22ba19-581c-4f75-9bd6-4de0538779a2\") " pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.397964 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nff6t\" (UniqueName: \"kubernetes.io/projected/cfbd9d32-25ae-4369-8e16-ce174c0802dc-kube-api-access-nff6t\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.398094 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j98vr\" (UniqueName: \"kubernetes.io/projected/4e9b965f-6060-43e7-aa1c-b73472075bae-kube-api-access-j98vr\") pod \"nova-operator-controller-manager-7f54b7d6d4-cf7gh\" (UID: \"4e9b965f-6060-43e7-aa1c-b73472075bae\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.402388 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mrhd\" (UniqueName: \"kubernetes.io/projected/1b364747-4f4c-4431-becf-0f2b30bc9d20-kube-api-access-5mrhd\") pod \"ovn-operator-controller-manager-6f75f45d54-z899w\" (UID: \"1b364747-4f4c-4431-becf-0f2b30bc9d20\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.405634 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.406608 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.411956 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-k86gl" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.413260 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm6cv\" (UniqueName: \"kubernetes.io/projected/03047106-c820-43c2-bee1-c8b1fb3a0a0c-kube-api-access-xm6cv\") pod \"neutron-operator-controller-manager-7ffd8d76d4-p47jp\" (UID: \"03047106-c820-43c2-bee1-c8b1fb3a0a0c\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.417284 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.453386 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.462703 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.464855 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.465015 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjqh5\" (UniqueName: \"kubernetes.io/projected/931ac40b-6695-41c7-9d8f-c8eefca6e587-kube-api-access-rjqh5\") pod \"placement-operator-controller-manager-79d5ccc684-5zhml\" (UID: \"931ac40b-6695-41c7-9d8f-c8eefca6e587\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.465130 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spnsn\" (UniqueName: \"kubernetes.io/projected/fd5d672d-1c27-4782-bbf3-c6d936a8c9bb-kube-api-access-spnsn\") pod \"telemetry-operator-controller-manager-799bc87c89-bmdgt\" (UID: \"fd5d672d-1c27-4782-bbf3-c6d936a8c9bb\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.465159 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9wr6\" (UniqueName: \"kubernetes.io/projected/aba99191-8a3a-47dc-8dca-136de682a567-kube-api-access-k9wr6\") pod \"swift-operator-controller-manager-547cbdb99f-b4kzb\" (UID: \"aba99191-8a3a-47dc-8dca-136de682a567\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.468475 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-n9dct" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.470229 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.475465 4995 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.496677 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.497651 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9wr6\" (UniqueName: \"kubernetes.io/projected/aba99191-8a3a-47dc-8dca-136de682a567-kube-api-access-k9wr6\") pod \"swift-operator-controller-manager-547cbdb99f-b4kzb\" (UID: \"aba99191-8a3a-47dc-8dca-136de682a567\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.498025 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjqh5\" (UniqueName: \"kubernetes.io/projected/931ac40b-6695-41c7-9d8f-c8eefca6e587-kube-api-access-rjqh5\") pod \"placement-operator-controller-manager-79d5ccc684-5zhml\" (UID: \"931ac40b-6695-41c7-9d8f-c8eefca6e587\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.508124 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.524190 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.525121 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.528025 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-fw4c6" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.528223 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" Jan 26 23:23:33 crc kubenswrapper[4995]: W0126 23:23:33.528285 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c1f5873_cf2b_4fd3_a83e_97611d3ee0e6.slice/crio-ce432dd8a92e7f2cea330114c0cee874a27a397cbaf29a92788fc211a9fd15f3 WatchSource:0}: Error finding container ce432dd8a92e7f2cea330114c0cee874a27a397cbaf29a92788fc211a9fd15f3: Status 404 returned error can't find the container with id ce432dd8a92e7f2cea330114c0cee874a27a397cbaf29a92788fc211a9fd15f3 Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.534235 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.542036 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.542975 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.545274 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.546322 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.546585 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-8npzj" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.552314 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.554794 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.555833 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.556322 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.566856 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxv76\" (UniqueName: \"kubernetes.io/projected/b60b13f0-97c0-42b9-85fd-2a51218c9ac1-kube-api-access-mxv76\") pod \"test-operator-controller-manager-69797bbcbd-kjmpf\" (UID: \"b60b13f0-97c0-42b9-85fd-2a51218c9ac1\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.566987 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spnsn\" (UniqueName: \"kubernetes.io/projected/fd5d672d-1c27-4782-bbf3-c6d936a8c9bb-kube-api-access-spnsn\") pod \"telemetry-operator-controller-manager-799bc87c89-bmdgt\" (UID: \"fd5d672d-1c27-4782-bbf3-c6d936a8c9bb\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.575191 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.585205 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spnsn\" (UniqueName: \"kubernetes.io/projected/fd5d672d-1c27-4782-bbf3-c6d936a8c9bb-kube-api-access-spnsn\") pod \"telemetry-operator-controller-manager-799bc87c89-bmdgt\" (UID: \"fd5d672d-1c27-4782-bbf3-c6d936a8c9bb\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.597366 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.599764 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.600631 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" Jan 26 23:23:33 crc kubenswrapper[4995]: W0126 23:23:33.601379 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70dc0d96_2ba1_487e_8ffc_a98725e002c4.slice/crio-6bf28be1ab73e9905e3a89920972f95a74143032dccbb74d789a1813bd607646 WatchSource:0}: Error finding container 6bf28be1ab73e9905e3a89920972f95a74143032dccbb74d789a1813bd607646: Status 404 returned error can't find the container with id 6bf28be1ab73e9905e3a89920972f95a74143032dccbb74d789a1813bd607646 Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.603397 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-v6k5n" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.605479 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.614542 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.626524 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.629162 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.630947 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.663554 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.668079 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbbx5\" (UniqueName: \"kubernetes.io/projected/03478ac9-bd6b-4726-86b4-cd29045b6dc0-kube-api-access-lbbx5\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.668131 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghlvz\" (UniqueName: \"kubernetes.io/projected/a0641fd3-88a7-4fb2-93f9-ffce84aadef2-kube-api-access-ghlvz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dk2dl\" (UID: \"a0641fd3-88a7-4fb2-93f9-ffce84aadef2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.668162 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxv76\" (UniqueName: \"kubernetes.io/projected/b60b13f0-97c0-42b9-85fd-2a51218c9ac1-kube-api-access-mxv76\") pod \"test-operator-controller-manager-69797bbcbd-kjmpf\" (UID: \"b60b13f0-97c0-42b9-85fd-2a51218c9ac1\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.668196 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 
crc kubenswrapper[4995]: I0126 23:23:33.668228 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjxhp\" (UniqueName: \"kubernetes.io/projected/e28ba494-e3ae-4294-8018-e9b8d7a1f96a-kube-api-access-fjxhp\") pod \"watcher-operator-controller-manager-7b8f755c7-tlv6g\" (UID: \"e28ba494-e3ae-4294-8018-e9b8d7a1f96a\") " pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.668244 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.668306 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.668417 4995 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.668455 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert podName:3a2f8d86-155b-476b-86c4-fda3eb595fc9 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:34.668442104 +0000 UTC m=+918.833149569 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert") pod "infra-operator-controller-manager-7d75bc88d5-n9dc8" (UID: "3a2f8d86-155b-476b-86c4-fda3eb595fc9") : secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.689165 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxv76\" (UniqueName: \"kubernetes.io/projected/b60b13f0-97c0-42b9-85fd-2a51218c9ac1-kube-api-access-mxv76\") pod \"test-operator-controller-manager-69797bbcbd-kjmpf\" (UID: \"b60b13f0-97c0-42b9-85fd-2a51218c9ac1\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.725230 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.767370 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.770824 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbbx5\" (UniqueName: \"kubernetes.io/projected/03478ac9-bd6b-4726-86b4-cd29045b6dc0-kube-api-access-lbbx5\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.770863 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghlvz\" (UniqueName: \"kubernetes.io/projected/a0641fd3-88a7-4fb2-93f9-ffce84aadef2-kube-api-access-ghlvz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dk2dl\" (UID: \"a0641fd3-88a7-4fb2-93f9-ffce84aadef2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.770905 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.770937 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjxhp\" (UniqueName: \"kubernetes.io/projected/e28ba494-e3ae-4294-8018-e9b8d7a1f96a-kube-api-access-fjxhp\") pod \"watcher-operator-controller-manager-7b8f755c7-tlv6g\" (UID: \"e28ba494-e3ae-4294-8018-e9b8d7a1f96a\") " pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.770955 
4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.771092 4995 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.771148 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:34.271130738 +0000 UTC m=+918.435838203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.771637 4995 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.771668 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:34.271659572 +0000 UTC m=+918.436367037 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "metrics-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.782351 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.800146 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.802690 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjxhp\" (UniqueName: \"kubernetes.io/projected/e28ba494-e3ae-4294-8018-e9b8d7a1f96a-kube-api-access-fjxhp\") pod \"watcher-operator-controller-manager-7b8f755c7-tlv6g\" (UID: \"e28ba494-e3ae-4294-8018-e9b8d7a1f96a\") " pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.802927 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbbx5\" (UniqueName: \"kubernetes.io/projected/03478ac9-bd6b-4726-86b4-cd29045b6dc0-kube-api-access-lbbx5\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.806273 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghlvz\" (UniqueName: \"kubernetes.io/projected/a0641fd3-88a7-4fb2-93f9-ffce84aadef2-kube-api-access-ghlvz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dk2dl\" (UID: \"a0641fd3-88a7-4fb2-93f9-ffce84aadef2\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.830596 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" Jan 26 23:23:33 crc kubenswrapper[4995]: W0126 23:23:33.832505 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90ae2b4f_43e9_4a37_abc5_d90e958e540b.slice/crio-79e9de4c165ef360a1c0b15eab1fbb6065659c3169cb094a5bdfb0000ea373a3 WatchSource:0}: Error finding container 79e9de4c165ef360a1c0b15eab1fbb6065659c3169cb094a5bdfb0000ea373a3: Status 404 returned error can't find the container with id 79e9de4c165ef360a1c0b15eab1fbb6065659c3169cb094a5bdfb0000ea373a3 Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.860536 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.876931 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.877130 4995 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.877189 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert podName:cfbd9d32-25ae-4369-8e16-ce174c0802dc nodeName:}" failed. 
No retries permitted until 2026-01-26 23:23:34.877172456 +0000 UTC m=+919.041879921 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" (UID: "cfbd9d32-25ae-4369-8e16-ce174c0802dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.890862 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm"] Jan 26 23:23:33 crc kubenswrapper[4995]: W0126 23:23:33.926876 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd8c5b8d_f13d_48a8_82ff_9928fb5b5b5e.slice/crio-1bba2597f5cd73aa73f7d3652ced902b755a2dc2df8baf5fb74a49b8899c2fd3 WatchSource:0}: Error finding container 1bba2597f5cd73aa73f7d3652ced902b755a2dc2df8baf5fb74a49b8899c2fd3: Status 404 returned error can't find the container with id 1bba2597f5cd73aa73f7d3652ced902b755a2dc2df8baf5fb74a49b8899c2fd3 Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.928993 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.930551 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" event={"ID":"4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6","Type":"ContainerStarted","Data":"ce432dd8a92e7f2cea330114c0cee874a27a397cbaf29a92788fc211a9fd15f3"} Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.941031 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" event={"ID":"90ae2b4f-43e9-4a37-abc5-d90e958e540b","Type":"ContainerStarted","Data":"79e9de4c165ef360a1c0b15eab1fbb6065659c3169cb094a5bdfb0000ea373a3"} Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.958090 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" event={"ID":"c5dd6b1a-1515-4ad6-b89e-0c7253a71281","Type":"ContainerStarted","Data":"fa010a063338fd4ba88d0d5ab493d394b1f1b84f432065519814e0b18329e32d"} Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.959752 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" event={"ID":"e29f1042-97e4-430c-a262-53ab3cca40d9","Type":"ContainerStarted","Data":"71f098930a94cb3e775748336a363501513777b8facfcf6ad0c544a897977bd2"} Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.978154 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" event={"ID":"70dc0d96-2ba1-487e-8ffc-a98725e002c4","Type":"ContainerStarted","Data":"6bf28be1ab73e9905e3a89920972f95a74143032dccbb74d789a1813bd607646"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.024050 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.074495 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9"] Jan 26 23:23:34 crc kubenswrapper[4995]: W0126 23:23:34.113384 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod555394ee_9ad5_417f_9698_646ba1ddc5f2.slice/crio-8c730fce6cb0f1b27bd2e10f4a2472b950ce28d60d9b930455d34218edaf74cf WatchSource:0}: Error finding container 8c730fce6cb0f1b27bd2e10f4a2472b950ce28d60d9b930455d34218edaf74cf: Status 404 returned error can't find the container with id 8c730fce6cb0f1b27bd2e10f4a2472b950ce28d60d9b930455d34218edaf74cf Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.203356 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg"] Jan 26 23:23:34 crc kubenswrapper[4995]: W0126 23:23:34.218264 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd2183e6_a9e4_44b8_861f_9a545aac1c12.slice/crio-b25eaa9d2f14448dad7bfd1aec42603f572a30085ef7d5fb8a24b5f38b816717 WatchSource:0}: Error finding container b25eaa9d2f14448dad7bfd1aec42603f572a30085ef7d5fb8a24b5f38b816717: Status 404 returned error can't find the container with id b25eaa9d2f14448dad7bfd1aec42603f572a30085ef7d5fb8a24b5f38b816717 Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.233606 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.250158 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.285685 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.285767 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.285853 4995 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.285917 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:35.285898461 +0000 UTC m=+919.450605926 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "metrics-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.285970 4995 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.286034 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:35.286016804 +0000 UTC m=+919.450724339 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "webhook-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: W0126 23:23:34.541267 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd5d672d_1c27_4782_bbf3_c6d936a8c9bb.slice/crio-c0dcc40692578aed101fc6f67b6d3afa2819467e675116c528282b680ea31d56 WatchSource:0}: Error finding container c0dcc40692578aed101fc6f67b6d3afa2819467e675116c528282b680ea31d56: Status 404 returned error can't find the container with id c0dcc40692578aed101fc6f67b6d3afa2819467e675116c528282b680ea31d56 Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.557373 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.557621 4995 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.557634 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.576349 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf"] Jan 26 23:23:34 crc kubenswrapper[4995]: W0126 23:23:34.577045 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode28ba494_e3ae_4294_8018_e9b8d7a1f96a.slice/crio-8fc64f2241602e5430031603131579ddfcb71635f56628aa31eac33e8190f64f WatchSource:0}: Error finding container 8fc64f2241602e5430031603131579ddfcb71635f56628aa31eac33e8190f64f: Status 404 returned error can't find the container with id 8fc64f2241602e5430031603131579ddfcb71635f56628aa31eac33e8190f64f Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.584444 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.223:5001/openstack-k8s-operators/watcher-operator:09b2d9800e2605016d087ebe1039eab09a5c2745,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjxhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7b8f755c7-tlv6g_openstack-operators(e28ba494-e3ae-4294-8018-e9b8d7a1f96a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.585755 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" Jan 26 23:23:34 crc 
kubenswrapper[4995]: I0126 23:23:34.587135 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.605411 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.690989 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.691313 4995 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.691420 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert podName:3a2f8d86-155b-476b-86c4-fda3eb595fc9 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:36.691392156 +0000 UTC m=+920.856099621 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert") pod "infra-operator-controller-manager-7d75bc88d5-n9dc8" (UID: "3a2f8d86-155b-476b-86c4-fda3eb595fc9") : secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.711917 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh"] Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.722952 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jcjzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh_openstack-operators(0d39c5fc-e526-46e8-8773-6bf87e938b06): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.724166 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" podUID="0d39c5fc-e526-46e8-8773-6bf87e938b06" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.733316 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.739402 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl"] Jan 26 23:23:34 crc kubenswrapper[4995]: W0126 23:23:34.743813 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b364747_4f4c_4431_becf_0f2b30bc9d20.slice/crio-2d1b4bee5070f8923c4bb70324dc0e80bdb812283a4fa1f49a36f5c24bb81ebc WatchSource:0}: Error finding container 2d1b4bee5070f8923c4bb70324dc0e80bdb812283a4fa1f49a36f5c24bb81ebc: Status 404 returned error can't find the container with id 2d1b4bee5070f8923c4bb70324dc0e80bdb812283a4fa1f49a36f5c24bb81ebc Jan 26 23:23:34 crc kubenswrapper[4995]: W0126 23:23:34.745959 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0641fd3_88a7_4fb2_93f9_ffce84aadef2.slice/crio-9e22da7f158f2b8b8e8eeb5726435e908e1c9c41a596eb1979f825e22d57dc50 WatchSource:0}: Error finding container 9e22da7f158f2b8b8e8eeb5726435e908e1c9c41a596eb1979f825e22d57dc50: Status 404 returned error can't find the container with id 9e22da7f158f2b8b8e8eeb5726435e908e1c9c41a596eb1979f825e22d57dc50 Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.751276 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ghlvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-dk2dl_openstack-operators(a0641fd3-88a7-4fb2-93f9-ffce84aadef2): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.752417 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" podUID="a0641fd3-88a7-4fb2-93f9-ffce84aadef2" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.782452 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml"] Jan 26 23:23:34 crc kubenswrapper[4995]: W0126 23:23:34.794227 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod931ac40b_6695_41c7_9d8f_c8eefca6e587.slice/crio-457b2b5a72f7084231cf138d41d0b1eff889928d4c395533fffa59a7ca72e4bd WatchSource:0}: Error finding container 457b2b5a72f7084231cf138d41d0b1eff889928d4c395533fffa59a7ca72e4bd: Status 404 returned error can't find the container with id 457b2b5a72f7084231cf138d41d0b1eff889928d4c395533fffa59a7ca72e4bd Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.797756 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} 
{} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rjqh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-5zhml_openstack-operators(931ac40b-6695-41c7-9d8f-c8eefca6e587): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.798932 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" 
podUID="931ac40b-6695-41c7-9d8f-c8eefca6e587" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.896661 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.896792 4995 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.896843 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert podName:cfbd9d32-25ae-4369-8e16-ce174c0802dc nodeName:}" failed. No retries permitted until 2026-01-26 23:23:36.896829486 +0000 UTC m=+921.061536951 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" (UID: "cfbd9d32-25ae-4369-8e16-ce174c0802dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.983821 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" event={"ID":"e28ba494-e3ae-4294-8018-e9b8d7a1f96a","Type":"ContainerStarted","Data":"8fc64f2241602e5430031603131579ddfcb71635f56628aa31eac33e8190f64f"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.984914 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" event={"ID":"4e9b965f-6060-43e7-aa1c-b73472075bae","Type":"ContainerStarted","Data":"98cbfe9f7b14a4d5efeb6d86491ae4a6e71a3235f9e1474331b1c8cdce0a471b"} Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.985338 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/openstack-k8s-operators/watcher-operator:09b2d9800e2605016d087ebe1039eab09a5c2745\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.986038 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" event={"ID":"fd2183e6-a9e4-44b8-861f-9a545aac1c12","Type":"ContainerStarted","Data":"b25eaa9d2f14448dad7bfd1aec42603f572a30085ef7d5fb8a24b5f38b816717"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.986900 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" event={"ID":"bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e","Type":"ContainerStarted","Data":"1bba2597f5cd73aa73f7d3652ced902b755a2dc2df8baf5fb74a49b8899c2fd3"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.988132 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" event={"ID":"fd5d672d-1c27-4782-bbf3-c6d936a8c9bb","Type":"ContainerStarted","Data":"c0dcc40692578aed101fc6f67b6d3afa2819467e675116c528282b680ea31d56"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.989129 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" event={"ID":"1b364747-4f4c-4431-becf-0f2b30bc9d20","Type":"ContainerStarted","Data":"2d1b4bee5070f8923c4bb70324dc0e80bdb812283a4fa1f49a36f5c24bb81ebc"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.990041 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" event={"ID":"b60b13f0-97c0-42b9-85fd-2a51218c9ac1","Type":"ContainerStarted","Data":"76ffa1ad397a7f4ad1f1a63d13fc1270672ed78b210e725944382eabbd094d43"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.990930 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" event={"ID":"a0641fd3-88a7-4fb2-93f9-ffce84aadef2","Type":"ContainerStarted","Data":"9e22da7f158f2b8b8e8eeb5726435e908e1c9c41a596eb1979f825e22d57dc50"} Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.992029 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" podUID="a0641fd3-88a7-4fb2-93f9-ffce84aadef2" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.992795 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" event={"ID":"aba99191-8a3a-47dc-8dca-136de682a567","Type":"ContainerStarted","Data":"8daecca1eb253faed5e10103bf43dc1ef1f33d212699a0dd5017e1c0de7f3880"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.993625 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" event={"ID":"03047106-c820-43c2-bee1-c8b1fb3a0a0c","Type":"ContainerStarted","Data":"920b323a81d13117b743c419afa7e9978d132e659c59f9586fd22972010d73a1"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.994362 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" event={"ID":"235cf5b2-2094-4345-bf37-edbcb2e5e48f","Type":"ContainerStarted","Data":"e9b2b1fd3e24002b96b21764e5374029b2a1a2aa608a263c5d3fd60e688c69ba"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.995230 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" event={"ID":"0d39c5fc-e526-46e8-8773-6bf87e938b06","Type":"ContainerStarted","Data":"0fcdbbcdfedca2eedb1606627094da500578bde4e5bb2c8d508748c9a648ea02"} Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.996025 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" podUID="0d39c5fc-e526-46e8-8773-6bf87e938b06" Jan 26 
23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.996691 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" event={"ID":"931ac40b-6695-41c7-9d8f-c8eefca6e587","Type":"ContainerStarted","Data":"457b2b5a72f7084231cf138d41d0b1eff889928d4c395533fffa59a7ca72e4bd"} Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.997653 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" podUID="931ac40b-6695-41c7-9d8f-c8eefca6e587" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.998154 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" event={"ID":"555394ee-9ad5-417f-9698-646ba1ddc5f2","Type":"ContainerStarted","Data":"8c730fce6cb0f1b27bd2e10f4a2472b950ce28d60d9b930455d34218edaf74cf"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.999477 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" event={"ID":"ce22ba19-581c-4f75-9bd6-4de0538779a2","Type":"ContainerStarted","Data":"e782ccd6cff54a1a4868ff3e0b7aa7c52dc1815d6b8d0ca9c549c490a24d3e92"} Jan 26 23:23:35 crc kubenswrapper[4995]: I0126 23:23:35.188500 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7j9zc"] Jan 26 23:23:35 crc kubenswrapper[4995]: I0126 23:23:35.306641 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod 
\"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:35 crc kubenswrapper[4995]: I0126 23:23:35.306734 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:35 crc kubenswrapper[4995]: E0126 23:23:35.306820 4995 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 23:23:35 crc kubenswrapper[4995]: E0126 23:23:35.306917 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:37.306893393 +0000 UTC m=+921.471600928 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "metrics-server-cert" not found Jan 26 23:23:35 crc kubenswrapper[4995]: E0126 23:23:35.306959 4995 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 23:23:35 crc kubenswrapper[4995]: E0126 23:23:35.307052 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. 
No retries permitted until 2026-01-26 23:23:37.307029677 +0000 UTC m=+921.471737202 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "webhook-server-cert" not found Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.026457 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" podUID="0d39c5fc-e526-46e8-8773-6bf87e938b06" Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.026476 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" podUID="a0641fd3-88a7-4fb2-93f9-ffce84aadef2" Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.026526 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/openstack-k8s-operators/watcher-operator:09b2d9800e2605016d087ebe1039eab09a5c2745\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.026654 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" podUID="931ac40b-6695-41c7-9d8f-c8eefca6e587" Jan 26 23:23:36 crc kubenswrapper[4995]: I0126 23:23:36.739494 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.739947 4995 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.739987 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert podName:3a2f8d86-155b-476b-86c4-fda3eb595fc9 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:40.739973436 +0000 UTC m=+924.904680901 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert") pod "infra-operator-controller-manager-7d75bc88d5-n9dc8" (UID: "3a2f8d86-155b-476b-86c4-fda3eb595fc9") : secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:36 crc kubenswrapper[4995]: I0126 23:23:36.944449 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.944895 4995 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.944972 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert podName:cfbd9d32-25ae-4369-8e16-ce174c0802dc nodeName:}" failed. No retries permitted until 2026-01-26 23:23:40.944952484 +0000 UTC m=+925.109659939 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" (UID: "cfbd9d32-25ae-4369-8e16-ce174c0802dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.028177 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7j9zc" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="registry-server" containerID="cri-o://acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca" gracePeriod=2 Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.352651 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.352706 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:37 crc kubenswrapper[4995]: E0126 23:23:37.352843 4995 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 23:23:37 crc kubenswrapper[4995]: E0126 23:23:37.352923 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs 
podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:41.35290333 +0000 UTC m=+925.517610795 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "metrics-server-cert" not found Jan 26 23:23:37 crc kubenswrapper[4995]: E0126 23:23:37.352855 4995 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 23:23:37 crc kubenswrapper[4995]: E0126 23:23:37.352992 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:41.352977931 +0000 UTC m=+925.517685396 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "webhook-server-cert" not found Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.466319 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.555704 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvclp\" (UniqueName: \"kubernetes.io/projected/fb8f3318-4432-4877-9e0c-1ae39d3a849e-kube-api-access-nvclp\") pod \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.555834 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-utilities\") pod \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.555880 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-catalog-content\") pod \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.557523 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-utilities" (OuterVolumeSpecName: "utilities") pod "fb8f3318-4432-4877-9e0c-1ae39d3a849e" (UID: "fb8f3318-4432-4877-9e0c-1ae39d3a849e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.562224 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8f3318-4432-4877-9e0c-1ae39d3a849e-kube-api-access-nvclp" (OuterVolumeSpecName: "kube-api-access-nvclp") pod "fb8f3318-4432-4877-9e0c-1ae39d3a849e" (UID: "fb8f3318-4432-4877-9e0c-1ae39d3a849e"). InnerVolumeSpecName "kube-api-access-nvclp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.606730 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb8f3318-4432-4877-9e0c-1ae39d3a849e" (UID: "fb8f3318-4432-4877-9e0c-1ae39d3a849e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.657971 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.658004 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.658014 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvclp\" (UniqueName: \"kubernetes.io/projected/fb8f3318-4432-4877-9e0c-1ae39d3a849e-kube-api-access-nvclp\") on node \"crc\" DevicePath \"\"" Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.043528 4995 generic.go:334] "Generic (PLEG): container finished" podID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerID="acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca" exitCode=0 Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.043572 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7j9zc" event={"ID":"fb8f3318-4432-4877-9e0c-1ae39d3a849e","Type":"ContainerDied","Data":"acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca"} Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.043599 4995 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-7j9zc" event={"ID":"fb8f3318-4432-4877-9e0c-1ae39d3a849e","Type":"ContainerDied","Data":"04da271e4f7505c2ffd196e4561fda52d5add96fbb0f643f634bb2bd36cc7757"} Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.043619 4995 scope.go:117] "RemoveContainer" containerID="acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca" Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.043629 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.076871 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7j9zc"] Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.082634 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7j9zc"] Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.528750 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" path="/var/lib/kubelet/pods/fb8f3318-4432-4877-9e0c-1ae39d3a849e/volumes" Jan 26 23:23:40 crc kubenswrapper[4995]: I0126 23:23:40.743652 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:40 crc kubenswrapper[4995]: E0126 23:23:40.743854 4995 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:40 crc kubenswrapper[4995]: E0126 23:23:40.743985 4995 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert podName:3a2f8d86-155b-476b-86c4-fda3eb595fc9 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:48.743941918 +0000 UTC m=+932.908649383 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert") pod "infra-operator-controller-manager-7d75bc88d5-n9dc8" (UID: "3a2f8d86-155b-476b-86c4-fda3eb595fc9") : secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:40 crc kubenswrapper[4995]: I0126 23:23:40.947264 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:40 crc kubenswrapper[4995]: E0126 23:23:40.947456 4995 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:40 crc kubenswrapper[4995]: E0126 23:23:40.947500 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert podName:cfbd9d32-25ae-4369-8e16-ce174c0802dc nodeName:}" failed. No retries permitted until 2026-01-26 23:23:48.94748761 +0000 UTC m=+933.112195075 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" (UID: "cfbd9d32-25ae-4369-8e16-ce174c0802dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:41 crc kubenswrapper[4995]: I0126 23:23:41.454218 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:41 crc kubenswrapper[4995]: I0126 23:23:41.454578 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:41 crc kubenswrapper[4995]: E0126 23:23:41.454457 4995 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 23:23:41 crc kubenswrapper[4995]: E0126 23:23:41.454833 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:49.454819567 +0000 UTC m=+933.619527032 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "metrics-server-cert" not found Jan 26 23:23:41 crc kubenswrapper[4995]: E0126 23:23:41.454683 4995 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 23:23:41 crc kubenswrapper[4995]: E0126 23:23:41.454994 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:49.454952741 +0000 UTC m=+933.619660246 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "webhook-server-cert" not found Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.134015 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569" Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.134839 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xm6cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7ffd8d76d4-p47jp_openstack-operators(03047106-c820-43c2-bee1-c8b1fb3a0a0c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.136092 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" podUID="03047106-c820-43c2-bee1-c8b1fb3a0a0c" Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.760865 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f" Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.761185 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-whrdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-77554cdc5c-kgv2f_openstack-operators(90ae2b4f-43e9-4a37-abc5-d90e958e540b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.762975 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" podUID="90ae2b4f-43e9-4a37-abc5-d90e958e540b" Jan 26 23:23:48 crc kubenswrapper[4995]: I0126 23:23:48.763284 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:48 crc kubenswrapper[4995]: I0126 23:23:48.779632 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:48 crc kubenswrapper[4995]: I0126 23:23:48.851057 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:48 crc kubenswrapper[4995]: I0126 23:23:48.967986 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.968173 4995 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.968271 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert podName:cfbd9d32-25ae-4369-8e16-ce174c0802dc nodeName:}" failed. No retries permitted until 2026-01-26 23:24:04.968245726 +0000 UTC m=+949.132953191 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" (UID: "cfbd9d32-25ae-4369-8e16-ce174c0802dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:49 crc kubenswrapper[4995]: E0126 23:23:49.137527 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" podUID="03047106-c820-43c2-bee1-c8b1fb3a0a0c" Jan 26 23:23:49 crc kubenswrapper[4995]: E0126 23:23:49.139145 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f\\\"\"" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" podUID="90ae2b4f-43e9-4a37-abc5-d90e958e540b" Jan 26 23:23:49 crc kubenswrapper[4995]: I0126 23:23:49.474773 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:49 crc kubenswrapper[4995]: I0126 23:23:49.474905 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" 
(UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:49 crc kubenswrapper[4995]: E0126 23:23:49.475181 4995 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 23:23:49 crc kubenswrapper[4995]: E0126 23:23:49.475262 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:24:05.475235414 +0000 UTC m=+949.639942919 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "webhook-server-cert" not found Jan 26 23:23:49 crc kubenswrapper[4995]: E0126 23:23:49.475436 4995 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 23:23:49 crc kubenswrapper[4995]: E0126 23:23:49.475488 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:24:05.47547151 +0000 UTC m=+949.640179015 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "metrics-server-cert" not found Jan 26 23:23:54 crc kubenswrapper[4995]: I0126 23:23:54.496818 4995 scope.go:117] "RemoveContainer" containerID="b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492" Jan 26 23:23:54 crc kubenswrapper[4995]: E0126 23:23:54.766274 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 26 23:23:54 crc kubenswrapper[4995]: E0126 23:23:54.766902 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5mrhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-z899w_openstack-operators(1b364747-4f4c-4431-becf-0f2b30bc9d20): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:23:54 crc kubenswrapper[4995]: E0126 23:23:54.768288 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" podUID="1b364747-4f4c-4431-becf-0f2b30bc9d20" Jan 26 23:23:55 crc kubenswrapper[4995]: E0126 23:23:55.686899 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" podUID="1b364747-4f4c-4431-becf-0f2b30bc9d20" Jan 26 23:23:55 crc kubenswrapper[4995]: I0126 23:23:55.744647 4995 scope.go:117] "RemoveContainer" containerID="34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b" Jan 26 23:23:55 crc kubenswrapper[4995]: E0126 23:23:55.744878 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:dbde47574a2204e5cb6af468e5c74df5124b1daab0ebcb0dc8c489fa40c8942f" Jan 26 23:23:55 crc kubenswrapper[4995]: E0126 23:23:55.745170 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:dbde47574a2204e5cb6af468e5c74df5124b1daab0ebcb0dc8c489fa40c8942f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j98vr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7f54b7d6d4-cf7gh_openstack-operators(4e9b965f-6060-43e7-aa1c-b73472075bae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:23:55 crc kubenswrapper[4995]: E0126 23:23:55.746518 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" podUID="4e9b965f-6060-43e7-aa1c-b73472075bae" Jan 26 23:23:56 crc kubenswrapper[4995]: E0126 23:23:56.197278 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:dbde47574a2204e5cb6af468e5c74df5124b1daab0ebcb0dc8c489fa40c8942f\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" podUID="4e9b965f-6060-43e7-aa1c-b73472075bae" Jan 26 23:23:56 crc kubenswrapper[4995]: E0126 23:23:56.776202 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487" Jan 26 23:23:56 crc kubenswrapper[4995]: E0126 23:23:56.776591 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fcldk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55f684fd56-gzjxj_openstack-operators(235cf5b2-2094-4345-bf37-edbcb2e5e48f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:23:56 crc kubenswrapper[4995]: E0126 23:23:56.777784 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" podUID="235cf5b2-2094-4345-bf37-edbcb2e5e48f" Jan 26 23:23:57 crc kubenswrapper[4995]: E0126 23:23:57.201617 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" podUID="235cf5b2-2094-4345-bf37-edbcb2e5e48f" Jan 26 23:23:57 crc kubenswrapper[4995]: I0126 23:23:57.297599 4995 scope.go:117] "RemoveContainer" containerID="acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca" Jan 26 23:23:57 crc kubenswrapper[4995]: E0126 23:23:57.298208 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca\": container with ID starting with acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca not found: ID does not exist" containerID="acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca" Jan 26 23:23:57 crc kubenswrapper[4995]: I0126 23:23:57.298258 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca"} err="failed to get container status \"acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca\": rpc error: code = NotFound desc = could not find container \"acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca\": container with ID starting with acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca not found: ID does not exist" Jan 26 23:23:57 crc kubenswrapper[4995]: I0126 23:23:57.298286 4995 scope.go:117] "RemoveContainer" 
containerID="b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492" Jan 26 23:23:57 crc kubenswrapper[4995]: E0126 23:23:57.298631 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492\": container with ID starting with b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492 not found: ID does not exist" containerID="b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492" Jan 26 23:23:57 crc kubenswrapper[4995]: I0126 23:23:57.298678 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492"} err="failed to get container status \"b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492\": rpc error: code = NotFound desc = could not find container \"b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492\": container with ID starting with b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492 not found: ID does not exist" Jan 26 23:23:57 crc kubenswrapper[4995]: I0126 23:23:57.298693 4995 scope.go:117] "RemoveContainer" containerID="34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b" Jan 26 23:23:57 crc kubenswrapper[4995]: E0126 23:23:57.299078 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b\": container with ID starting with 34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b not found: ID does not exist" containerID="34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b" Jan 26 23:23:57 crc kubenswrapper[4995]: I0126 23:23:57.299187 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b"} err="failed to get container status \"34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b\": rpc error: code = NotFound desc = could not find container \"34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b\": container with ID starting with 34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b not found: ID does not exist" Jan 26 23:23:59 crc kubenswrapper[4995]: I0126 23:23:59.217339 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8"] Jan 26 23:23:59 crc kubenswrapper[4995]: W0126 23:23:59.295273 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a2f8d86_155b_476b_86c4_fda3eb595fc9.slice/crio-34d4803c446491cacb1f9bb67a30b9a52fd01d58ea3b281d5361fbbb25e2aa7a WatchSource:0}: Error finding container 34d4803c446491cacb1f9bb67a30b9a52fd01d58ea3b281d5361fbbb25e2aa7a: Status 404 returned error can't find the container with id 34d4803c446491cacb1f9bb67a30b9a52fd01d58ea3b281d5361fbbb25e2aa7a Jan 26 23:23:59 crc kubenswrapper[4995]: I0126 23:23:59.318065 4995 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.227244 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" event={"ID":"fd2183e6-a9e4-44b8-861f-9a545aac1c12","Type":"ContainerStarted","Data":"425dbb2c3ff281b3978c8bf23ba3e5786c56e311bf3329529153c59e822e731b"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.227602 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.231839 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" event={"ID":"bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e","Type":"ContainerStarted","Data":"71be7ba21ce684c877534dcea5ef377ec267934f459259af4c6bf20c0397963a"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.232427 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.238482 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" event={"ID":"a0641fd3-88a7-4fb2-93f9-ffce84aadef2","Type":"ContainerStarted","Data":"cb067666a02842c5d4943828660a0a26b3d91f9b5821ff29271d5897b9b01921"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.241749 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" event={"ID":"931ac40b-6695-41c7-9d8f-c8eefca6e587","Type":"ContainerStarted","Data":"fdcc4ccbdebd8ac253aa344175a4763b27dcb627ac664aa3c8f8028958bb1125"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.242350 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.244668 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" event={"ID":"aba99191-8a3a-47dc-8dca-136de682a567","Type":"ContainerStarted","Data":"1c5c7c5db459d927b01a20361b3ee86a1d6b224ce124485c7ab9509c06d62b8e"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.244717 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" Jan 26 23:24:00 crc kubenswrapper[4995]: 
I0126 23:24:00.245924 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" event={"ID":"3a2f8d86-155b-476b-86c4-fda3eb595fc9","Type":"ContainerStarted","Data":"34d4803c446491cacb1f9bb67a30b9a52fd01d58ea3b281d5361fbbb25e2aa7a"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.252619 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" podStartSLOduration=4.254682458 podStartE2EDuration="28.252600938s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.225337949 +0000 UTC m=+918.390045414" lastFinishedPulling="2026-01-26 23:23:58.223256419 +0000 UTC m=+942.387963894" observedRunningTime="2026-01-26 23:24:00.249866159 +0000 UTC m=+944.414573624" watchObservedRunningTime="2026-01-26 23:24:00.252600938 +0000 UTC m=+944.417308403" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.253012 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" event={"ID":"e29f1042-97e4-430c-a262-53ab3cca40d9","Type":"ContainerStarted","Data":"1d759865728d2d43489a6b8aa2a9ac9172e08cf025c770244e92e89555fd5b42"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.253774 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.265631 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" event={"ID":"fd5d672d-1c27-4782-bbf3-c6d936a8c9bb","Type":"ContainerStarted","Data":"aa4bf00a0d56dd4eab33900595ef5aeec8fbf6242964bb00d63e0e7cbb2562ea"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.267207 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.268589 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" event={"ID":"ce22ba19-581c-4f75-9bd6-4de0538779a2","Type":"ContainerStarted","Data":"72e2686d39e2a1ece6d8f29eaca9ebf8dae5b716d153699fd40c773f18e2d2a7"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.269240 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.276540 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" event={"ID":"b60b13f0-97c0-42b9-85fd-2a51218c9ac1","Type":"ContainerStarted","Data":"9a66e2284757cf521b97d7874fc95404fd3fee234318aeedcdb072ee42001524"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.277235 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.286430 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" event={"ID":"70dc0d96-2ba1-487e-8ffc-a98725e002c4","Type":"ContainerStarted","Data":"efcba479dd7725c611825d529a745734bc83a8c6325625e9a2bddb1efeedf2d2"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.287269 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.288664 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" podStartSLOduration=2.634415452 
podStartE2EDuration="27.288645868s" podCreationTimestamp="2026-01-26 23:23:33 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.751136788 +0000 UTC m=+918.915844253" lastFinishedPulling="2026-01-26 23:23:59.405367194 +0000 UTC m=+943.570074669" observedRunningTime="2026-01-26 23:24:00.283445038 +0000 UTC m=+944.448152573" watchObservedRunningTime="2026-01-26 23:24:00.288645868 +0000 UTC m=+944.453353333" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.297806 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" event={"ID":"4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6","Type":"ContainerStarted","Data":"081a100c455622cc35271500428dc1e1afec1f13946dc0158d093f410b942e32"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.299067 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.306345 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" event={"ID":"c5dd6b1a-1515-4ad6-b89e-0c7253a71281","Type":"ContainerStarted","Data":"96f5f7e5688b748f0863bcf6fb5e65e092c9d02536e07f285aaa4b712c9b5f1e"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.306414 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.312800 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" event={"ID":"e28ba494-e3ae-4294-8018-e9b8d7a1f96a","Type":"ContainerStarted","Data":"385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.313568 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.313578 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" podStartSLOduration=6.533772363 podStartE2EDuration="28.31355794s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:33.949183214 +0000 UTC m=+918.113890679" lastFinishedPulling="2026-01-26 23:23:55.728968751 +0000 UTC m=+939.893676256" observedRunningTime="2026-01-26 23:24:00.31035306 +0000 UTC m=+944.475060525" watchObservedRunningTime="2026-01-26 23:24:00.31355794 +0000 UTC m=+944.478265425" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.351386 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" event={"ID":"555394ee-9ad5-417f-9698-646ba1ddc5f2","Type":"ContainerStarted","Data":"d8f2d6dd22673cc2e070ae43e6acd84d5ab84a281993d9771e53d72dfced9a48"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.351519 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.370662 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" event={"ID":"0d39c5fc-e526-46e8-8773-6bf87e938b06","Type":"ContainerStarted","Data":"903e197ff8ff455daea89f3cda21bd1e7b2437e42ac288cd994ef09b1ddc84a8"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.371716 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.393031 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" podStartSLOduration=7.207647078 podStartE2EDuration="28.393006893s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.559033331 +0000 UTC m=+918.723740806" lastFinishedPulling="2026-01-26 23:23:55.744393156 +0000 UTC m=+939.909100621" observedRunningTime="2026-01-26 23:24:00.376305426 +0000 UTC m=+944.541012881" watchObservedRunningTime="2026-01-26 23:24:00.393006893 +0000 UTC m=+944.557714358" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.445786 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" podStartSLOduration=4.474884486 podStartE2EDuration="28.445771631s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.797650189 +0000 UTC m=+918.962357654" lastFinishedPulling="2026-01-26 23:23:58.768537314 +0000 UTC m=+942.933244799" observedRunningTime="2026-01-26 23:24:00.417362652 +0000 UTC m=+944.582070117" watchObservedRunningTime="2026-01-26 23:24:00.445771631 +0000 UTC m=+944.610479096" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.446495 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" podStartSLOduration=2.657457457 podStartE2EDuration="27.446491349s" podCreationTimestamp="2026-01-26 23:23:33 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.584314253 +0000 UTC m=+918.749021718" lastFinishedPulling="2026-01-26 23:23:59.373348125 +0000 UTC m=+943.538055610" observedRunningTime="2026-01-26 23:24:00.439072244 +0000 UTC m=+944.603779709" watchObservedRunningTime="2026-01-26 23:24:00.446491349 +0000 UTC m=+944.611198814" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.463624 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" podStartSLOduration=3.783283028 podStartE2EDuration="28.463607956s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:33.542935601 +0000 UTC m=+917.707643066" lastFinishedPulling="2026-01-26 23:23:58.223260519 +0000 UTC m=+942.387967994" observedRunningTime="2026-01-26 23:24:00.461134085 +0000 UTC m=+944.625841550" watchObservedRunningTime="2026-01-26 23:24:00.463607956 +0000 UTC m=+944.628315421" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.488278 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" podStartSLOduration=5.554220746 podStartE2EDuration="28.488263982s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:33.864009247 +0000 UTC m=+918.028716712" lastFinishedPulling="2026-01-26 23:23:56.798052483 +0000 UTC m=+940.962759948" observedRunningTime="2026-01-26 23:24:00.487497673 +0000 UTC m=+944.652205138" watchObservedRunningTime="2026-01-26 23:24:00.488263982 +0000 UTC m=+944.652971437" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.550803 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" podStartSLOduration=5.281424324 podStartE2EDuration="27.550783243s" podCreationTimestamp="2026-01-26 23:23:33 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.544990181 +0000 UTC m=+918.709697646" lastFinishedPulling="2026-01-26 23:23:56.8143491 +0000 UTC m=+940.979056565" observedRunningTime="2026-01-26 23:24:00.520501197 +0000 UTC m=+944.685208672" watchObservedRunningTime="2026-01-26 23:24:00.550783243 +0000 UTC m=+944.715490708" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.551268 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" podStartSLOduration=6.645152225 podStartE2EDuration="28.551263395s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:33.8228555 +0000 UTC m=+917.987562965" lastFinishedPulling="2026-01-26 23:23:55.72896663 +0000 UTC m=+939.893674135" observedRunningTime="2026-01-26 23:24:00.54947934 +0000 UTC m=+944.714186805" watchObservedRunningTime="2026-01-26 23:24:00.551263395 +0000 UTC m=+944.715970850" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.569518 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" podStartSLOduration=3.960331699 podStartE2EDuration="28.56949698s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.722731029 +0000 UTC m=+918.887438494" lastFinishedPulling="2026-01-26 23:23:59.33189631 +0000 UTC m=+943.496603775" observedRunningTime="2026-01-26 23:24:00.569036799 +0000 UTC m=+944.733744264" watchObservedRunningTime="2026-01-26 23:24:00.56949698 +0000 UTC m=+944.734204435" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.593679 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" podStartSLOduration=5.393549043 podStartE2EDuration="27.593655943s" podCreationTimestamp="2026-01-26 23:23:33 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.551598506 +0000 UTC m=+918.716305991" lastFinishedPulling="2026-01-26 23:23:56.751705416 +0000 UTC m=+940.916412891" observedRunningTime="2026-01-26 23:24:00.592195437 +0000 UTC m=+944.756902902" watchObservedRunningTime="2026-01-26 23:24:00.593655943 +0000 UTC m=+944.758363408" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.617449 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" podStartSLOduration=5.896719138 podStartE2EDuration="28.617434277s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.552969 +0000 UTC m=+918.717676485" lastFinishedPulling="2026-01-26 23:23:57.273684159 +0000 UTC m=+941.438391624" observedRunningTime="2026-01-26 23:24:00.61394886 +0000 UTC m=+944.778656325" watchObservedRunningTime="2026-01-26 23:24:00.617434277 +0000 UTC m=+944.782141742" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.648491 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" podStartSLOduration=4.056758377 podStartE2EDuration="28.648469392s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:33.631572054 +0000 UTC m=+917.796279519" lastFinishedPulling="2026-01-26 23:23:58.223283029 +0000 UTC m=+942.387990534" observedRunningTime="2026-01-26 23:24:00.642456582 +0000 UTC m=+944.807164057" watchObservedRunningTime="2026-01-26 23:24:00.648469392 +0000 UTC m=+944.813176857" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.658911 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" podStartSLOduration=6.026679813 podStartE2EDuration="28.658889892s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.119471486 +0000 UTC m=+918.284178951" lastFinishedPulling="2026-01-26 23:23:56.751681575 +0000 UTC m=+940.916389030" observedRunningTime="2026-01-26 23:24:00.654874942 +0000 UTC m=+944.819582417" watchObservedRunningTime="2026-01-26 23:24:00.658889892 +0000 UTC m=+944.823597357" Jan 26 23:24:02 crc kubenswrapper[4995]: I0126 23:24:02.407359 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" event={"ID":"3a2f8d86-155b-476b-86c4-fda3eb595fc9","Type":"ContainerStarted","Data":"f41c23d61ae1f0fec8db535bfb06f831fec0d579f1223b081dc6ef72a37caf74"} Jan 26 23:24:02 crc kubenswrapper[4995]: I0126 23:24:02.407753 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:24:02 crc kubenswrapper[4995]: I0126 23:24:02.429876 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" podStartSLOduration=27.550242322 podStartE2EDuration="30.429852411s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:59.317846079 +0000 UTC m=+943.482553534" lastFinishedPulling="2026-01-26 23:24:02.197456138 +0000 UTC m=+946.362163623" observedRunningTime="2026-01-26 23:24:02.428397044 +0000 UTC m=+946.593104529" watchObservedRunningTime="2026-01-26 23:24:02.429852411 +0000 UTC m=+946.594559886" Jan 26 23:24:03 crc kubenswrapper[4995]: I0126 23:24:03.416232 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" event={"ID":"90ae2b4f-43e9-4a37-abc5-d90e958e540b","Type":"ContainerStarted","Data":"9f75de979e8debd0f0408e92f6a297458aaa5b5f5265f358f0c3397dd841b7d0"} Jan 26 23:24:03 crc kubenswrapper[4995]: I0126 23:24:03.416694 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" Jan 26 23:24:03 crc kubenswrapper[4995]: I0126 23:24:03.421093 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" 
event={"ID":"03047106-c820-43c2-bee1-c8b1fb3a0a0c","Type":"ContainerStarted","Data":"85b5dcfecfe8e67bc59258e4b14441561267b335d9b5e6a7cc95340b245dc49f"} Jan 26 23:24:03 crc kubenswrapper[4995]: I0126 23:24:03.422006 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" Jan 26 23:24:03 crc kubenswrapper[4995]: I0126 23:24:03.440597 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" podStartSLOduration=2.294670231 podStartE2EDuration="31.440574457s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:33.863614088 +0000 UTC m=+918.028321553" lastFinishedPulling="2026-01-26 23:24:03.009518284 +0000 UTC m=+947.174225779" observedRunningTime="2026-01-26 23:24:03.435943392 +0000 UTC m=+947.600650867" watchObservedRunningTime="2026-01-26 23:24:03.440574457 +0000 UTC m=+947.605281942" Jan 26 23:24:03 crc kubenswrapper[4995]: I0126 23:24:03.454348 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" podStartSLOduration=2.664648218 podStartE2EDuration="31.454326841s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.266191379 +0000 UTC m=+918.430898844" lastFinishedPulling="2026-01-26 23:24:03.055870002 +0000 UTC m=+947.220577467" observedRunningTime="2026-01-26 23:24:03.452869254 +0000 UTC m=+947.617576719" watchObservedRunningTime="2026-01-26 23:24:03.454326841 +0000 UTC m=+947.619034306" Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.015557 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: 
\"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.023160 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.114707 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.523446 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.523509 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.527831 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" 
(UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.529975 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:24:05 crc kubenswrapper[4995]: W0126 23:24:05.616309 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfbd9d32_25ae_4369_8e16_ce174c0802dc.slice/crio-2c0bc0b6cb3349dbcedf20e7d5c9df686e4ef75a5b266a9b13265ea0b606f3c9 WatchSource:0}: Error finding container 2c0bc0b6cb3349dbcedf20e7d5c9df686e4ef75a5b266a9b13265ea0b606f3c9: Status 404 returned error can't find the container with id 2c0bc0b6cb3349dbcedf20e7d5c9df686e4ef75a5b266a9b13265ea0b606f3c9 Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.618360 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5"] Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.689022 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:24:06 crc kubenswrapper[4995]: I0126 23:24:06.102340 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w"] Jan 26 23:24:06 crc kubenswrapper[4995]: W0126 23:24:06.114295 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03478ac9_bd6b_4726_86b4_cd29045b6dc0.slice/crio-4b7c0e543fb19c676d1f3b28adaec5b12ecdde987db27482bb460718724faf83 WatchSource:0}: Error finding container 4b7c0e543fb19c676d1f3b28adaec5b12ecdde987db27482bb460718724faf83: Status 404 returned error can't find the container with id 4b7c0e543fb19c676d1f3b28adaec5b12ecdde987db27482bb460718724faf83 Jan 26 23:24:06 crc kubenswrapper[4995]: I0126 23:24:06.448767 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" event={"ID":"03478ac9-bd6b-4726-86b4-cd29045b6dc0","Type":"ContainerStarted","Data":"ea13fe15f31261aea83c9994356212607c23ac772413c7990e6e12e9593a33f5"} Jan 26 23:24:06 crc kubenswrapper[4995]: I0126 23:24:06.448822 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" event={"ID":"03478ac9-bd6b-4726-86b4-cd29045b6dc0","Type":"ContainerStarted","Data":"4b7c0e543fb19c676d1f3b28adaec5b12ecdde987db27482bb460718724faf83"} Jan 26 23:24:06 crc kubenswrapper[4995]: I0126 23:24:06.448889 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:24:06 crc kubenswrapper[4995]: I0126 23:24:06.456836 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" 
event={"ID":"cfbd9d32-25ae-4369-8e16-ce174c0802dc","Type":"ContainerStarted","Data":"2c0bc0b6cb3349dbcedf20e7d5c9df686e4ef75a5b266a9b13265ea0b606f3c9"} Jan 26 23:24:06 crc kubenswrapper[4995]: I0126 23:24:06.489399 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" podStartSLOduration=33.489380491 podStartE2EDuration="33.489380491s" podCreationTimestamp="2026-01-26 23:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:24:06.476976592 +0000 UTC m=+950.641684107" watchObservedRunningTime="2026-01-26 23:24:06.489380491 +0000 UTC m=+950.654087966" Jan 26 23:24:07 crc kubenswrapper[4995]: I0126 23:24:07.467434 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" event={"ID":"cfbd9d32-25ae-4369-8e16-ce174c0802dc","Type":"ContainerStarted","Data":"0a07e996bfe8eb6e549bbef3228645b8f548ed7f17e47a454d069d3ad3102d1f"} Jan 26 23:24:07 crc kubenswrapper[4995]: I0126 23:24:07.467822 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:24:07 crc kubenswrapper[4995]: I0126 23:24:07.503027 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" podStartSLOduration=33.818222973 podStartE2EDuration="35.503005779s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:24:05.618779183 +0000 UTC m=+949.783486658" lastFinishedPulling="2026-01-26 23:24:07.303561999 +0000 UTC m=+951.468269464" observedRunningTime="2026-01-26 23:24:07.496766793 +0000 UTC m=+951.661474298" watchObservedRunningTime="2026-01-26 23:24:07.503005779 +0000 UTC 
m=+951.667713254" Jan 26 23:24:08 crc kubenswrapper[4995]: I0126 23:24:08.486585 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" event={"ID":"1b364747-4f4c-4431-becf-0f2b30bc9d20","Type":"ContainerStarted","Data":"8795fa5817cae12cba287456f3746c00542fb636d8283b306437be23e5d4b3f2"} Jan 26 23:24:08 crc kubenswrapper[4995]: I0126 23:24:08.487450 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" Jan 26 23:24:08 crc kubenswrapper[4995]: I0126 23:24:08.518634 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" podStartSLOduration=3.076874381 podStartE2EDuration="36.518612147s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.745822215 +0000 UTC m=+918.910529680" lastFinishedPulling="2026-01-26 23:24:08.187559961 +0000 UTC m=+952.352267446" observedRunningTime="2026-01-26 23:24:08.512828013 +0000 UTC m=+952.677535498" watchObservedRunningTime="2026-01-26 23:24:08.518612147 +0000 UTC m=+952.683319632" Jan 26 23:24:08 crc kubenswrapper[4995]: I0126 23:24:08.860157 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:24:11 crc kubenswrapper[4995]: I0126 23:24:11.513607 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" event={"ID":"235cf5b2-2094-4345-bf37-edbcb2e5e48f","Type":"ContainerStarted","Data":"448fc0f7f81217363e8187a42cadcc3c795455cd3496b1f04f53e5e39c9dabf1"} Jan 26 23:24:11 crc kubenswrapper[4995]: I0126 23:24:11.514163 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" 
Jan 26 23:24:11 crc kubenswrapper[4995]: I0126 23:24:11.538675 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" podStartSLOduration=2.88497353 podStartE2EDuration="39.538656723s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.267246266 +0000 UTC m=+918.431953731" lastFinishedPulling="2026-01-26 23:24:10.920929459 +0000 UTC m=+955.085636924" observedRunningTime="2026-01-26 23:24:11.534989312 +0000 UTC m=+955.699696817" watchObservedRunningTime="2026-01-26 23:24:11.538656723 +0000 UTC m=+955.703364188" Jan 26 23:24:12 crc kubenswrapper[4995]: I0126 23:24:12.539725 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" event={"ID":"4e9b965f-6060-43e7-aa1c-b73472075bae","Type":"ContainerStarted","Data":"025a8a00710599d955c5d8a5e3ef6c315173a0c80d3c0a044f612e6f1b93a08f"} Jan 26 23:24:12 crc kubenswrapper[4995]: I0126 23:24:12.539938 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" Jan 26 23:24:12 crc kubenswrapper[4995]: I0126 23:24:12.569269 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" podStartSLOduration=3.127394423 podStartE2EDuration="40.569247696s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.54496506 +0000 UTC m=+918.709672535" lastFinishedPulling="2026-01-26 23:24:11.986818323 +0000 UTC m=+956.151525808" observedRunningTime="2026-01-26 23:24:12.561841941 +0000 UTC m=+956.726549436" watchObservedRunningTime="2026-01-26 23:24:12.569247696 +0000 UTC m=+956.733955171" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.023220 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.041632 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.072951 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.133049 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.218597 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.304147 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.456750 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.499457 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.510979 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.538687 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.581625 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.602083 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.634224 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.666816 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.772536 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.837601 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.863160 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:24:15 crc kubenswrapper[4995]: I0126 23:24:15.124927 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:24:15 crc kubenswrapper[4995]: I0126 23:24:15.699490 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:24:23 crc kubenswrapper[4995]: I0126 23:24:23.478202 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" Jan 26 23:24:23 crc kubenswrapper[4995]: I0126 23:24:23.560520 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.009308 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g"] Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.010435 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" containerName="manager" containerID="cri-o://385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38" gracePeriod=10 Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.065378 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp"] Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.065765 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" podUID="892f33f6-3409-407d-b85b-922b8bdbfa16" containerName="operator" containerID="cri-o://6a5755d8b4f8e8fbc12a9584a063252b6234f0b1c979feb6127b8e6060aa5114" gracePeriod=10 Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.459989 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.599555 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjxhp\" (UniqueName: \"kubernetes.io/projected/e28ba494-e3ae-4294-8018-e9b8d7a1f96a-kube-api-access-fjxhp\") pod \"e28ba494-e3ae-4294-8018-e9b8d7a1f96a\" (UID: \"e28ba494-e3ae-4294-8018-e9b8d7a1f96a\") " Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.606323 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28ba494-e3ae-4294-8018-e9b8d7a1f96a-kube-api-access-fjxhp" (OuterVolumeSpecName: "kube-api-access-fjxhp") pod "e28ba494-e3ae-4294-8018-e9b8d7a1f96a" (UID: "e28ba494-e3ae-4294-8018-e9b8d7a1f96a"). InnerVolumeSpecName "kube-api-access-fjxhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.691597 4995 generic.go:334] "Generic (PLEG): container finished" podID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" containerID="385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38" exitCode=0 Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.691711 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" event={"ID":"e28ba494-e3ae-4294-8018-e9b8d7a1f96a","Type":"ContainerDied","Data":"385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38"} Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.691755 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" event={"ID":"e28ba494-e3ae-4294-8018-e9b8d7a1f96a","Type":"ContainerDied","Data":"8fc64f2241602e5430031603131579ddfcb71635f56628aa31eac33e8190f64f"} Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.691752 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.691782 4995 scope.go:117] "RemoveContainer" containerID="385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.694037 4995 generic.go:334] "Generic (PLEG): container finished" podID="892f33f6-3409-407d-b85b-922b8bdbfa16" containerID="6a5755d8b4f8e8fbc12a9584a063252b6234f0b1c979feb6127b8e6060aa5114" exitCode=0 Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.694082 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" event={"ID":"892f33f6-3409-407d-b85b-922b8bdbfa16","Type":"ContainerDied","Data":"6a5755d8b4f8e8fbc12a9584a063252b6234f0b1c979feb6127b8e6060aa5114"} Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.701143 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjxhp\" (UniqueName: \"kubernetes.io/projected/e28ba494-e3ae-4294-8018-e9b8d7a1f96a-kube-api-access-fjxhp\") on node \"crc\" DevicePath \"\"" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.728443 4995 scope.go:117] "RemoveContainer" containerID="385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38" Jan 26 23:24:28 crc kubenswrapper[4995]: E0126 23:24:28.733031 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38\": container with ID starting with 385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38 not found: ID does not exist" containerID="385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.733351 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38"} err="failed to get container status \"385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38\": rpc error: code = NotFound desc = could not find container \"385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38\": container with ID starting with 385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38 not found: ID does not exist"
Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.736811 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g"]
Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.747166 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g"]
Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.959959 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp"
Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.004583 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4k22\" (UniqueName: \"kubernetes.io/projected/892f33f6-3409-407d-b85b-922b8bdbfa16-kube-api-access-f4k22\") pod \"892f33f6-3409-407d-b85b-922b8bdbfa16\" (UID: \"892f33f6-3409-407d-b85b-922b8bdbfa16\") "
Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.010293 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892f33f6-3409-407d-b85b-922b8bdbfa16-kube-api-access-f4k22" (OuterVolumeSpecName: "kube-api-access-f4k22") pod "892f33f6-3409-407d-b85b-922b8bdbfa16" (UID: "892f33f6-3409-407d-b85b-922b8bdbfa16"). InnerVolumeSpecName "kube-api-access-f4k22". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.106464 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4k22\" (UniqueName: \"kubernetes.io/projected/892f33f6-3409-407d-b85b-922b8bdbfa16-kube-api-access-f4k22\") on node \"crc\" DevicePath \"\""
Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.701355 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" event={"ID":"892f33f6-3409-407d-b85b-922b8bdbfa16","Type":"ContainerDied","Data":"c39384df979e6337b8f9a32ef86a0cb2526573842d84866ed04f1ff9dcd951b0"}
Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.701661 4995 scope.go:117] "RemoveContainer" containerID="6a5755d8b4f8e8fbc12a9584a063252b6234f0b1c979feb6127b8e6060aa5114"
Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.701388 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp"
Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.971318 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp"]
Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.982275 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp"]
Jan 26 23:24:30 crc kubenswrapper[4995]: I0126 23:24:30.525037 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="892f33f6-3409-407d-b85b-922b8bdbfa16" path="/var/lib/kubelet/pods/892f33f6-3409-407d-b85b-922b8bdbfa16/volumes"
Jan 26 23:24:30 crc kubenswrapper[4995]: I0126 23:24:30.525515 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" path="/var/lib/kubelet/pods/e28ba494-e3ae-4294-8018-e9b8d7a1f96a/volumes"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.118651 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-6vpbz"]
Jan 26 23:24:32 crc kubenswrapper[4995]: E0126 23:24:32.119459 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="892f33f6-3409-407d-b85b-922b8bdbfa16" containerName="operator"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119483 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="892f33f6-3409-407d-b85b-922b8bdbfa16" containerName="operator"
Jan 26 23:24:32 crc kubenswrapper[4995]: E0126 23:24:32.119515 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="registry-server"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119529 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="registry-server"
Jan 26 23:24:32 crc kubenswrapper[4995]: E0126 23:24:32.119544 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" containerName="manager"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119558 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" containerName="manager"
Jan 26 23:24:32 crc kubenswrapper[4995]: E0126 23:24:32.119588 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="extract-content"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119601 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="extract-content"
Jan 26 23:24:32 crc kubenswrapper[4995]: E0126 23:24:32.119628 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="extract-utilities"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119641 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="extract-utilities"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119923 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" containerName="manager"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119947 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="892f33f6-3409-407d-b85b-922b8bdbfa16" containerName="operator"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119978 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="registry-server"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.120807 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-6vpbz"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.125306 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-index-dockercfg-rl7b8"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.136753 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-6vpbz"]
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.246095 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxplc\" (UniqueName: \"kubernetes.io/projected/a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0-kube-api-access-kxplc\") pod \"watcher-operator-index-6vpbz\" (UID: \"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0\") " pod="openstack-operators/watcher-operator-index-6vpbz"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.347780 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxplc\" (UniqueName: \"kubernetes.io/projected/a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0-kube-api-access-kxplc\") pod \"watcher-operator-index-6vpbz\" (UID: \"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0\") " pod="openstack-operators/watcher-operator-index-6vpbz"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.401899 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxplc\" (UniqueName: \"kubernetes.io/projected/a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0-kube-api-access-kxplc\") pod \"watcher-operator-index-6vpbz\" (UID: \"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0\") " pod="openstack-operators/watcher-operator-index-6vpbz"
Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.445017 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-6vpbz"
Jan 26 23:24:33 crc kubenswrapper[4995]: I0126 23:24:33.187659 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-6vpbz"]
Jan 26 23:24:33 crc kubenswrapper[4995]: W0126 23:24:33.203295 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7c97a9e_e3f1_441b_b4f8_6e15bfb926e0.slice/crio-9a1a02e89a108100f1c228222c2578d01a13c32f4e252341fd792022c7047b65 WatchSource:0}: Error finding container 9a1a02e89a108100f1c228222c2578d01a13c32f4e252341fd792022c7047b65: Status 404 returned error can't find the container with id 9a1a02e89a108100f1c228222c2578d01a13c32f4e252341fd792022c7047b65
Jan 26 23:24:33 crc kubenswrapper[4995]: I0126 23:24:33.729838 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-6vpbz" event={"ID":"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0","Type":"ContainerStarted","Data":"9a1a02e89a108100f1c228222c2578d01a13c32f4e252341fd792022c7047b65"}
Jan 26 23:24:34 crc kubenswrapper[4995]: I0126 23:24:34.741727 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-6vpbz" event={"ID":"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0","Type":"ContainerStarted","Data":"3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06"}
Jan 26 23:24:34 crc kubenswrapper[4995]: I0126 23:24:34.768687 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-6vpbz" podStartSLOduration=1.734916026 podStartE2EDuration="2.768652805s" podCreationTimestamp="2026-01-26 23:24:32 +0000 UTC" firstStartedPulling="2026-01-26 23:24:33.204873422 +0000 UTC m=+977.369580887" lastFinishedPulling="2026-01-26 23:24:34.238610211 +0000 UTC m=+978.403317666" observedRunningTime="2026-01-26 23:24:34.759434465 +0000 UTC m=+978.924141980" watchObservedRunningTime="2026-01-26 23:24:34.768652805 +0000 UTC m=+978.933360310"
Jan 26 23:24:35 crc kubenswrapper[4995]: I0126 23:24:35.707682 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-6vpbz"]
Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.312119 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-k8w76"]
Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.313460 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-k8w76"
Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.323765 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-k8w76"]
Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.423660 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6mck\" (UniqueName: \"kubernetes.io/projected/fea9da97-72c6-4b3a-a479-1566d93b3a22-kube-api-access-q6mck\") pod \"watcher-operator-index-k8w76\" (UID: \"fea9da97-72c6-4b3a-a479-1566d93b3a22\") " pod="openstack-operators/watcher-operator-index-k8w76"
Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.524555 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6mck\" (UniqueName: \"kubernetes.io/projected/fea9da97-72c6-4b3a-a479-1566d93b3a22-kube-api-access-q6mck\") pod \"watcher-operator-index-k8w76\" (UID: \"fea9da97-72c6-4b3a-a479-1566d93b3a22\") " pod="openstack-operators/watcher-operator-index-k8w76"
Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.569760 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6mck\" (UniqueName: \"kubernetes.io/projected/fea9da97-72c6-4b3a-a479-1566d93b3a22-kube-api-access-q6mck\") pod \"watcher-operator-index-k8w76\" (UID: \"fea9da97-72c6-4b3a-a479-1566d93b3a22\") " pod="openstack-operators/watcher-operator-index-k8w76"
Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.680887 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-k8w76"
Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.764239 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-index-6vpbz" podUID="a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0" containerName="registry-server" containerID="cri-o://3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06" gracePeriod=2
Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.974350 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-k8w76"]
Jan 26 23:24:36 crc kubenswrapper[4995]: W0126 23:24:36.975017 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfea9da97_72c6_4b3a_a479_1566d93b3a22.slice/crio-d8085d3dca346c2e1f1db8eca659feb2b704fe9e8b729d16ce5c25d230b25f4e WatchSource:0}: Error finding container d8085d3dca346c2e1f1db8eca659feb2b704fe9e8b729d16ce5c25d230b25f4e: Status 404 returned error can't find the container with id d8085d3dca346c2e1f1db8eca659feb2b704fe9e8b729d16ce5c25d230b25f4e
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.414209 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-6vpbz"
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.576715 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxplc\" (UniqueName: \"kubernetes.io/projected/a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0-kube-api-access-kxplc\") pod \"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0\" (UID: \"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0\") "
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.583601 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0-kube-api-access-kxplc" (OuterVolumeSpecName: "kube-api-access-kxplc") pod "a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0" (UID: "a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0"). InnerVolumeSpecName "kube-api-access-kxplc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.680497 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxplc\" (UniqueName: \"kubernetes.io/projected/a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0-kube-api-access-kxplc\") on node \"crc\" DevicePath \"\""
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.776568 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-k8w76" event={"ID":"fea9da97-72c6-4b3a-a479-1566d93b3a22","Type":"ContainerStarted","Data":"2597e92cbdadb7e020ded468fe9a531871ff88ba827a9710cfa848692d8bee48"}
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.776622 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-k8w76" event={"ID":"fea9da97-72c6-4b3a-a479-1566d93b3a22","Type":"ContainerStarted","Data":"d8085d3dca346c2e1f1db8eca659feb2b704fe9e8b729d16ce5c25d230b25f4e"}
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.782978 4995 generic.go:334] "Generic (PLEG): container finished" podID="a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0" containerID="3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06" exitCode=0
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.783040 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-6vpbz"
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.783045 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-6vpbz" event={"ID":"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0","Type":"ContainerDied","Data":"3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06"}
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.783138 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-6vpbz" event={"ID":"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0","Type":"ContainerDied","Data":"9a1a02e89a108100f1c228222c2578d01a13c32f4e252341fd792022c7047b65"}
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.783172 4995 scope.go:117] "RemoveContainer" containerID="3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06"
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.812714 4995 scope.go:117] "RemoveContainer" containerID="3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06"
Jan 26 23:24:37 crc kubenswrapper[4995]: E0126 23:24:37.813748 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06\": container with ID starting with 3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06 not found: ID does not exist" containerID="3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06"
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.813838 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06"} err="failed to get container status \"3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06\": rpc error: code = NotFound desc = could not find container \"3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06\": container with ID starting with 3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06 not found: ID does not exist"
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.815626 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-k8w76" podStartSLOduration=1.316272131 podStartE2EDuration="1.815616228s" podCreationTimestamp="2026-01-26 23:24:36 +0000 UTC" firstStartedPulling="2026-01-26 23:24:36.979037821 +0000 UTC m=+981.143745286" lastFinishedPulling="2026-01-26 23:24:37.478381918 +0000 UTC m=+981.643089383" observedRunningTime="2026-01-26 23:24:37.799989258 +0000 UTC m=+981.964696743" watchObservedRunningTime="2026-01-26 23:24:37.815616228 +0000 UTC m=+981.980323693"
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.819809 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-6vpbz"]
Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.824930 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-index-6vpbz"]
Jan 26 23:24:38 crc kubenswrapper[4995]: I0126 23:24:38.545036 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0" path="/var/lib/kubelet/pods/a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0/volumes"
Jan 26 23:24:40 crc kubenswrapper[4995]: I0126 23:24:40.893231 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 23:24:40 crc kubenswrapper[4995]: I0126 23:24:40.893681 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 23:24:46 crc kubenswrapper[4995]: I0126 23:24:46.682140 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-index-k8w76"
Jan 26 23:24:46 crc kubenswrapper[4995]: I0126 23:24:46.683353 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/watcher-operator-index-k8w76"
Jan 26 23:24:46 crc kubenswrapper[4995]: I0126 23:24:46.721557 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/watcher-operator-index-k8w76"
Jan 26 23:24:46 crc kubenswrapper[4995]: I0126 23:24:46.881672 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-index-k8w76"
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.758399 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"]
Jan 26 23:24:49 crc kubenswrapper[4995]: E0126 23:24:49.758968 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0" containerName="registry-server"
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.758982 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0" containerName="registry-server"
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.759183 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0" containerName="registry-server"
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.760134 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.764520 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jtm6l"
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.777681 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"]
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.865259 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjb9k\" (UniqueName: \"kubernetes.io/projected/5c23b438-d384-46e6-8c88-6703c70fccea-kube-api-access-xjb9k\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.865340 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-bundle\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.865488 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-util\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.966543 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-bundle\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.966703 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-util\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.966745 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjb9k\" (UniqueName: \"kubernetes.io/projected/5c23b438-d384-46e6-8c88-6703c70fccea-kube-api-access-xjb9k\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.967636 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-util\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.967802 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-bundle\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"
Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.992951 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjb9k\" (UniqueName: \"kubernetes.io/projected/5c23b438-d384-46e6-8c88-6703c70fccea-kube-api-access-xjb9k\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"
Jan 26 23:24:50 crc kubenswrapper[4995]: I0126 23:24:50.130157 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"
Jan 26 23:24:50 crc kubenswrapper[4995]: I0126 23:24:50.418147 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"]
Jan 26 23:24:50 crc kubenswrapper[4995]: W0126 23:24:50.762334 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c23b438_d384_46e6_8c88_6703c70fccea.slice/crio-4cd6bed35c8568e1a659223b1d9f5a083785483cdb0211678b799f6465ac6830 WatchSource:0}: Error finding container 4cd6bed35c8568e1a659223b1d9f5a083785483cdb0211678b799f6465ac6830: Status 404 returned error can't find the container with id 4cd6bed35c8568e1a659223b1d9f5a083785483cdb0211678b799f6465ac6830
Jan 26 23:24:50 crc kubenswrapper[4995]: I0126 23:24:50.888334 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" event={"ID":"5c23b438-d384-46e6-8c88-6703c70fccea","Type":"ContainerStarted","Data":"4cd6bed35c8568e1a659223b1d9f5a083785483cdb0211678b799f6465ac6830"}
Jan 26 23:24:51 crc kubenswrapper[4995]: I0126 23:24:51.898053 4995 generic.go:334] "Generic (PLEG): container finished" podID="5c23b438-d384-46e6-8c88-6703c70fccea" containerID="8f24a68134ac2281dde9b35e9f503389a667afb82ea81b48563498762a961994" exitCode=0
Jan 26 23:24:51 crc kubenswrapper[4995]: I0126 23:24:51.898179 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" event={"ID":"5c23b438-d384-46e6-8c88-6703c70fccea","Type":"ContainerDied","Data":"8f24a68134ac2281dde9b35e9f503389a667afb82ea81b48563498762a961994"}
Jan 26 23:24:52 crc kubenswrapper[4995]: I0126 23:24:52.911324 4995 generic.go:334] "Generic (PLEG): container finished" podID="5c23b438-d384-46e6-8c88-6703c70fccea" containerID="defe6506de0e57309828b36b87e983c4ec156df4d4956d49d988713262c93c71" exitCode=0
Jan 26 23:24:52 crc kubenswrapper[4995]: I0126 23:24:52.911432 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" event={"ID":"5c23b438-d384-46e6-8c88-6703c70fccea","Type":"ContainerDied","Data":"defe6506de0e57309828b36b87e983c4ec156df4d4956d49d988713262c93c71"}
Jan 26 23:24:53 crc kubenswrapper[4995]: I0126 23:24:53.927520 4995 generic.go:334] "Generic (PLEG): container finished" podID="5c23b438-d384-46e6-8c88-6703c70fccea" containerID="47b8ed82eaf50103c1712153fb983eadad78e1f7233544e73be686e857f0dfaa" exitCode=0
Jan 26 23:24:53 crc kubenswrapper[4995]: I0126 23:24:53.927636 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" event={"ID":"5c23b438-d384-46e6-8c88-6703c70fccea","Type":"ContainerDied","Data":"47b8ed82eaf50103c1712153fb983eadad78e1f7233544e73be686e857f0dfaa"}
Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.261146 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"
Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.342825 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjb9k\" (UniqueName: \"kubernetes.io/projected/5c23b438-d384-46e6-8c88-6703c70fccea-kube-api-access-xjb9k\") pod \"5c23b438-d384-46e6-8c88-6703c70fccea\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") "
Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.342928 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-bundle\") pod \"5c23b438-d384-46e6-8c88-6703c70fccea\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") "
Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.343006 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-util\") pod \"5c23b438-d384-46e6-8c88-6703c70fccea\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") "
Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.344802 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-bundle" (OuterVolumeSpecName: "bundle") pod "5c23b438-d384-46e6-8c88-6703c70fccea" (UID: "5c23b438-d384-46e6-8c88-6703c70fccea"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.360147 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c23b438-d384-46e6-8c88-6703c70fccea-kube-api-access-xjb9k" (OuterVolumeSpecName: "kube-api-access-xjb9k") pod "5c23b438-d384-46e6-8c88-6703c70fccea" (UID: "5c23b438-d384-46e6-8c88-6703c70fccea"). InnerVolumeSpecName "kube-api-access-xjb9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.364090 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-util" (OuterVolumeSpecName: "util") pod "5c23b438-d384-46e6-8c88-6703c70fccea" (UID: "5c23b438-d384-46e6-8c88-6703c70fccea"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.445613 4995 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.445673 4995 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-util\") on node \"crc\" DevicePath \"\""
Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.445694 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjb9k\" (UniqueName: \"kubernetes.io/projected/5c23b438-d384-46e6-8c88-6703c70fccea-kube-api-access-xjb9k\") on node \"crc\" DevicePath \"\""
Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.949189 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" event={"ID":"5c23b438-d384-46e6-8c88-6703c70fccea","Type":"ContainerDied","Data":"4cd6bed35c8568e1a659223b1d9f5a083785483cdb0211678b799f6465ac6830"}
Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.949247 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cd6bed35c8568e1a659223b1d9f5a083785483cdb0211678b799f6465ac6830"
Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.949304 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.248612 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"]
Jan 26 23:25:01 crc kubenswrapper[4995]: E0126 23:25:01.249119 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c23b438-d384-46e6-8c88-6703c70fccea" containerName="pull"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.249131 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c23b438-d384-46e6-8c88-6703c70fccea" containerName="pull"
Jan 26 23:25:01 crc kubenswrapper[4995]: E0126 23:25:01.249148 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c23b438-d384-46e6-8c88-6703c70fccea" containerName="extract"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.249154 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c23b438-d384-46e6-8c88-6703c70fccea" containerName="extract"
Jan 26 23:25:01 crc kubenswrapper[4995]: E0126 23:25:01.249166 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c23b438-d384-46e6-8c88-6703c70fccea" containerName="util"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.249171 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c23b438-d384-46e6-8c88-6703c70fccea" containerName="util"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.249297 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c23b438-d384-46e6-8c88-6703c70fccea" containerName="extract"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.249715 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.251273 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-service-cert"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.251437 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-fw4c6"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.264645 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"]
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.340031 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwdgr\" (UniqueName: \"kubernetes.io/projected/5c59a309-5169-4591-9059-414f361ef107-kube-api-access-bwdgr\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.340092 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-apiservice-cert\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.340231 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-webhook-cert\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.441840 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwdgr\" (UniqueName: \"kubernetes.io/projected/5c59a309-5169-4591-9059-414f361ef107-kube-api-access-bwdgr\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.441937 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-apiservice-cert\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.441993 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-webhook-cert\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.451138 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-apiservice-cert\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.458772 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-webhook-cert\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.465798 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwdgr\" (UniqueName: \"kubernetes.io/projected/5c59a309-5169-4591-9059-414f361ef107-kube-api-access-bwdgr\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.568483 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.775921 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll"]
Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.776997 4995 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.784016 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll"] Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.840926 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"] Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.855021 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/001f4541-5731-4423-9cf7-f2c339b975b1-webhook-cert\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.855217 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9cws\" (UniqueName: \"kubernetes.io/projected/001f4541-5731-4423-9cf7-f2c339b975b1-kube-api-access-j9cws\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.855348 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/001f4541-5731-4423-9cf7-f2c339b975b1-apiservice-cert\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.956703 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j9cws\" (UniqueName: \"kubernetes.io/projected/001f4541-5731-4423-9cf7-f2c339b975b1-kube-api-access-j9cws\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.956783 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/001f4541-5731-4423-9cf7-f2c339b975b1-apiservice-cert\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.956825 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/001f4541-5731-4423-9cf7-f2c339b975b1-webhook-cert\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.961826 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/001f4541-5731-4423-9cf7-f2c339b975b1-webhook-cert\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.963758 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/001f4541-5731-4423-9cf7-f2c339b975b1-apiservice-cert\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: 
\"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.987371 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9cws\" (UniqueName: \"kubernetes.io/projected/001f4541-5731-4423-9cf7-f2c339b975b1-kube-api-access-j9cws\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:02 crc kubenswrapper[4995]: I0126 23:25:02.005233 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" event={"ID":"5c59a309-5169-4591-9059-414f361ef107","Type":"ContainerStarted","Data":"9fb2cf74c6bd172d22c4db04ecf00ee3e66d50a61ad6ab006b596012477e9423"} Jan 26 23:25:02 crc kubenswrapper[4995]: I0126 23:25:02.106974 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:02 crc kubenswrapper[4995]: I0126 23:25:02.616615 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll"] Jan 26 23:25:02 crc kubenswrapper[4995]: W0126 23:25:02.636151 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod001f4541_5731_4423_9cf7_f2c339b975b1.slice/crio-a38961f4942880e34f943cd8803b14aeb57de984891af0a1d39f5afaea47785b WatchSource:0}: Error finding container a38961f4942880e34f943cd8803b14aeb57de984891af0a1d39f5afaea47785b: Status 404 returned error can't find the container with id a38961f4942880e34f943cd8803b14aeb57de984891af0a1d39f5afaea47785b Jan 26 23:25:03 crc kubenswrapper[4995]: I0126 23:25:03.013133 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" event={"ID":"5c59a309-5169-4591-9059-414f361ef107","Type":"ContainerStarted","Data":"3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa"} Jan 26 23:25:03 crc kubenswrapper[4995]: I0126 23:25:03.013598 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:03 crc kubenswrapper[4995]: I0126 23:25:03.016476 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" event={"ID":"001f4541-5731-4423-9cf7-f2c339b975b1","Type":"ContainerStarted","Data":"f261cde3f1f6fc54e192c076848912f28f7f301be79adb6fff5a64364694abb7"} Jan 26 23:25:03 crc kubenswrapper[4995]: I0126 23:25:03.016528 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" 
event={"ID":"001f4541-5731-4423-9cf7-f2c339b975b1","Type":"ContainerStarted","Data":"a38961f4942880e34f943cd8803b14aeb57de984891af0a1d39f5afaea47785b"} Jan 26 23:25:03 crc kubenswrapper[4995]: I0126 23:25:03.016660 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:03 crc kubenswrapper[4995]: I0126 23:25:03.042038 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" podStartSLOduration=2.042018352 podStartE2EDuration="2.042018352s" podCreationTimestamp="2026-01-26 23:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:25:03.037210662 +0000 UTC m=+1007.201918127" watchObservedRunningTime="2026-01-26 23:25:03.042018352 +0000 UTC m=+1007.206725807" Jan 26 23:25:03 crc kubenswrapper[4995]: I0126 23:25:03.056952 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" podStartSLOduration=2.056931774 podStartE2EDuration="2.056931774s" podCreationTimestamp="2026-01-26 23:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:25:03.051904359 +0000 UTC m=+1007.216611824" watchObservedRunningTime="2026-01-26 23:25:03.056931774 +0000 UTC m=+1007.221639239" Jan 26 23:25:10 crc kubenswrapper[4995]: I0126 23:25:10.894196 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:25:10 crc kubenswrapper[4995]: I0126 23:25:10.894943 4995 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:25:11 crc kubenswrapper[4995]: I0126 23:25:11.574732 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.111738 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.168044 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"] Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.174421 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" podUID="5c59a309-5169-4591-9059-414f361ef107" containerName="manager" containerID="cri-o://3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa" gracePeriod=10 Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.585290 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.760204 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-apiservice-cert\") pod \"5c59a309-5169-4591-9059-414f361ef107\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.760375 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-webhook-cert\") pod \"5c59a309-5169-4591-9059-414f361ef107\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.760432 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwdgr\" (UniqueName: \"kubernetes.io/projected/5c59a309-5169-4591-9059-414f361ef107-kube-api-access-bwdgr\") pod \"5c59a309-5169-4591-9059-414f361ef107\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.765134 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5c59a309-5169-4591-9059-414f361ef107" (UID: "5c59a309-5169-4591-9059-414f361ef107"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.765376 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c59a309-5169-4591-9059-414f361ef107-kube-api-access-bwdgr" (OuterVolumeSpecName: "kube-api-access-bwdgr") pod "5c59a309-5169-4591-9059-414f361ef107" (UID: "5c59a309-5169-4591-9059-414f361ef107"). 
InnerVolumeSpecName "kube-api-access-bwdgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.766759 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "5c59a309-5169-4591-9059-414f361ef107" (UID: "5c59a309-5169-4591-9059-414f361ef107"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.862954 4995 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.863010 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwdgr\" (UniqueName: \"kubernetes.io/projected/5c59a309-5169-4591-9059-414f361ef107-kube-api-access-bwdgr\") on node \"crc\" DevicePath \"\"" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.863026 4995 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.097991 4995 generic.go:334] "Generic (PLEG): container finished" podID="5c59a309-5169-4591-9059-414f361ef107" containerID="3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa" exitCode=0 Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.098047 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" event={"ID":"5c59a309-5169-4591-9059-414f361ef107","Type":"ContainerDied","Data":"3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa"} Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 
23:25:13.098134 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" event={"ID":"5c59a309-5169-4591-9059-414f361ef107","Type":"ContainerDied","Data":"9fb2cf74c6bd172d22c4db04ecf00ee3e66d50a61ad6ab006b596012477e9423"} Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.098161 4995 scope.go:117] "RemoveContainer" containerID="3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa" Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.098068 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.132149 4995 scope.go:117] "RemoveContainer" containerID="3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa" Jan 26 23:25:13 crc kubenswrapper[4995]: E0126 23:25:13.143509 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa\": container with ID starting with 3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa not found: ID does not exist" containerID="3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa" Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.143623 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa"} err="failed to get container status \"3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa\": rpc error: code = NotFound desc = could not find container \"3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa\": container with ID starting with 3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa not found: ID does not exist" Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.143697 
4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"] Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.152265 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"] Jan 26 23:25:14 crc kubenswrapper[4995]: I0126 23:25:14.532590 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c59a309-5169-4591-9059-414f361ef107" path="/var/lib/kubelet/pods/5c59a309-5169-4591-9059-414f361ef107/volumes" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.642944 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Jan 26 23:25:24 crc kubenswrapper[4995]: E0126 23:25:24.643788 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c59a309-5169-4591-9059-414f361ef107" containerName="manager" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.643805 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c59a309-5169-4591-9059-414f361ef107" containerName="manager" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.643967 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c59a309-5169-4591-9059-414f361ef107" containerName="manager" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.644679 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.646902 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-erlang-cookie" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.647502 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-notifications-svc" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.650624 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-default-user" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.650977 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-config-data" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.650991 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-conf" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.650984 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openshift-service-ca.crt" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.651746 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"kube-root-ca.crt" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.651869 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-plugins-conf" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.652039 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-dockercfg-pqnf2" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.668487 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Jan 26 23:25:24 
crc kubenswrapper[4995]: I0126 23:25:24.735571 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.736027 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.736294 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.736526 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.736729 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk2rx\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-kube-api-access-hk2rx\") pod 
\"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.736970 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/54ccebac-5075-4c00-a1e9-ebb66b43876e-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.737199 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.737379 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.737597 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.737846 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/54ccebac-5075-4c00-a1e9-ebb66b43876e-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.738066 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4e287017-b92a-4413-b433-c1224ce365df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e287017-b92a-4413-b433-c1224ce365df\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.839229 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.839295 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.839356 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc 
kubenswrapper[4995]: I0126 23:25:24.839402 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk2rx\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-kube-api-access-hk2rx\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.839450 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/54ccebac-5075-4c00-a1e9-ebb66b43876e-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.839917 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.840015 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.840562 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.841410 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.841568 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.841716 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.841746 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/54ccebac-5075-4c00-a1e9-ebb66b43876e-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.841808 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4e287017-b92a-4413-b433-c1224ce365df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e287017-b92a-4413-b433-c1224ce365df\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.841847 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.842181 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.844274 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.846660 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/54ccebac-5075-4c00-a1e9-ebb66b43876e-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.853227 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.853290 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.854333 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/54ccebac-5075-4c00-a1e9-ebb66b43876e-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.865315 4995 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.865354 4995 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4e287017-b92a-4413-b433-c1224ce365df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e287017-b92a-4413-b433-c1224ce365df\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/da5cb5359468e2c97ef0be615b3e6aea7eec4cdd8c24ba9a8c01b3413d40eb52/globalmount\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.877006 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk2rx\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-kube-api-access-hk2rx\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.892461 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4e287017-b92a-4413-b433-c1224ce365df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e287017-b92a-4413-b433-c1224ce365df\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.918697 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"]
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.920478 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.922339 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-default-user"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.924491 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-config-data"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.924600 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-server-dockercfg-mwzsq"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.924791 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-erlang-cookie"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.924872 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-plugins-conf"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.924992 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-svc"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.926292 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-server-conf"
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.940702 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"]
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.969727 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044268 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044500 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b909799-2071-4d68-ab55-d29f6e224bf2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044530 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-config-data\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044573 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044612 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044636 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b909799-2071-4d68-ab55-d29f6e224bf2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044651 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044690 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044742 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q4ws\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-kube-api-access-6q4ws\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044765 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044903 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146091 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146173 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146200 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146228 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b909799-2071-4d68-ab55-d29f6e224bf2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146244 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-config-data\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146262 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146287 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146317 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b909799-2071-4d68-ab55-d29f6e224bf2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146361 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146394 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146434 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q4ws\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-kube-api-access-6q4ws\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.147154 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.150283 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.150612 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.152375 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-config-data\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.153404 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.168343 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.168675 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b909799-2071-4d68-ab55-d29f6e224bf2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.175025 4995 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.175083 4995 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9b1a8ae1cced15c0fefbf855ad861c0e73323158eeb7a4fd7929b5650c51db8d/globalmount\"" pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.182417 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.183864 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b909799-2071-4d68-ab55-d29f6e224bf2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.187649 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q4ws\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-kube-api-access-6q4ws\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.225852 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.247671 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.458502 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"]
Jan 26 23:25:25 crc kubenswrapper[4995]: W0126 23:25:25.477822 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54ccebac_5075_4c00_a1e9_ebb66b43876e.slice/crio-775390884a8358a5084ceecb38099aabefbcba114a3c2aa21ee0469c185d6b0b WatchSource:0}: Error finding container 775390884a8358a5084ceecb38099aabefbcba114a3c2aa21ee0469c185d6b0b: Status 404 returned error can't find the container with id 775390884a8358a5084ceecb38099aabefbcba114a3c2aa21ee0469c185d6b0b
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.757761 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"]
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.080412 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstack-galera-0"]
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.081584 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.088716 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-scripts"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.089142 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config-data"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.091672 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-galera-openstack-svc"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.091799 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"galera-openstack-dockercfg-ghvl2"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.099776 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"combined-ca-bundle"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.103269 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"]
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166616 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166662 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166678 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166697 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hflxn\" (UniqueName: \"kubernetes.io/projected/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-kube-api-access-hflxn\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166717 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166745 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-kolla-config\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166780 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166812 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-config-data-default\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.215518 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"4b909799-2071-4d68-ab55-d29f6e224bf2","Type":"ContainerStarted","Data":"dbb80368d8ffcd44825bc5cd37008cb4e27e10851c3195eed5ce2f91495315e9"}
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.216636 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"54ccebac-5075-4c00-a1e9-ebb66b43876e","Type":"ContainerStarted","Data":"775390884a8358a5084ceecb38099aabefbcba114a3c2aa21ee0469c185d6b0b"}
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268197 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-config-data-default\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268279 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268305 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268319 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268341 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hflxn\" (UniqueName: \"kubernetes.io/projected/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-kube-api-access-hflxn\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268357 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268386 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-kolla-config\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268425 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.269449 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-config-data-default\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.270947 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-kolla-config\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.271431 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.272724 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.274469 4995 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.274499 4995 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e0ff4dc62304ff368840298572b97b47aaecc3ae5fd762b3367d1ed0e52e303f/globalmount\"" pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.276989 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.289636 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hflxn\" (UniqueName: \"kubernetes.io/projected/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-kube-api-access-hflxn\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.292066 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.328601 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.382374 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"]
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.385249 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.397705 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.402432 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.405668 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.405744 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-zzlxj"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.405872 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.470828 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qjbg\" (UniqueName: \"kubernetes.io/projected/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kube-api-access-2qjbg\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.470920 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-config-data\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.470985 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.471001 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.471052 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kolla-config\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.572852 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kolla-config\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0"
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.572940 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qjbg\" (UniqueName: 
\"kubernetes.io/projected/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kube-api-access-2qjbg\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.572989 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-config-data\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.573030 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.573047 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.573751 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-config-data\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.575310 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kolla-config\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " 
pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.588335 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.588717 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.593570 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qjbg\" (UniqueName: \"kubernetes.io/projected/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kube-api-access-2qjbg\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.722424 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.753420 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.754334 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.764622 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"telemetry-ceilometer-dockercfg-8j7tn" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.774914 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.877065 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vjq4\" (UniqueName: \"kubernetes.io/projected/f3e7ef92-19e4-45be-ba39-e8c1b10c2110-kube-api-access-2vjq4\") pod \"kube-state-metrics-0\" (UID: \"f3e7ef92-19e4-45be-ba39-e8c1b10c2110\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.968184 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.978724 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vjq4\" (UniqueName: \"kubernetes.io/projected/f3e7ef92-19e4-45be-ba39-e8c1b10c2110-kube-api-access-2vjq4\") pod \"kube-state-metrics-0\" (UID: \"f3e7ef92-19e4-45be-ba39-e8c1b10c2110\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:25:27 crc kubenswrapper[4995]: W0126 23:25:26.999042 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5da7bc3d_c0c7_4935_ba58_c64da8c943b0.slice/crio-deaebca0f127cebf5f87d9e5872c996fe150781d0cd779aba695fe74e06d6246 WatchSource:0}: Error finding container deaebca0f127cebf5f87d9e5872c996fe150781d0cd779aba695fe74e06d6246: Status 404 returned error can't find the container with id deaebca0f127cebf5f87d9e5872c996fe150781d0cd779aba695fe74e06d6246 Jan 26 23:25:27 crc kubenswrapper[4995]: 
I0126 23:25:27.031132 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vjq4\" (UniqueName: \"kubernetes.io/projected/f3e7ef92-19e4-45be-ba39-e8c1b10c2110-kube-api-access-2vjq4\") pod \"kube-state-metrics-0\" (UID: \"f3e7ef92-19e4-45be-ba39-e8c1b10c2110\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.094497 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.231231 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"5da7bc3d-c0c7-4935-ba58-c64da8c943b0","Type":"ContainerStarted","Data":"deaebca0f127cebf5f87d9e5872c996fe150781d0cd779aba695fe74e06d6246"} Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.357941 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.615929 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.618989 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.624530 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.624934 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-generated" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.632300 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-web-config" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.632541 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-tls-assets-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.632680 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-cluster-tls-config" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.632854 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-alertmanager-dockercfg-x7mks" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.655700 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.699853 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5083beb6-ae53-44e5-a82c-872943996b7b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.699928 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.699952 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/5083beb6-ae53-44e5-a82c-872943996b7b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.700066 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wbp9\" (UniqueName: \"kubernetes.io/projected/5083beb6-ae53-44e5-a82c-872943996b7b-kube-api-access-2wbp9\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.700094 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.700195 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5083beb6-ae53-44e5-a82c-872943996b7b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc 
kubenswrapper[4995]: I0126 23:25:27.700307 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802024 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wbp9\" (UniqueName: \"kubernetes.io/projected/5083beb6-ae53-44e5-a82c-872943996b7b-kube-api-access-2wbp9\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802069 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802110 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5083beb6-ae53-44e5-a82c-872943996b7b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802148 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " 
pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802185 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5083beb6-ae53-44e5-a82c-872943996b7b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802243 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802269 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/5083beb6-ae53-44e5-a82c-872943996b7b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802742 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/5083beb6-ae53-44e5-a82c-872943996b7b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.817383 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: 
\"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.817867 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.819854 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5083beb6-ae53-44e5-a82c-872943996b7b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.825757 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5083beb6-ae53-44e5-a82c-872943996b7b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.828297 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.831453 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg"] Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.836627 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2wbp9\" (UniqueName: \"kubernetes.io/projected/5083beb6-ae53-44e5-a82c-872943996b7b-kube-api-access-2wbp9\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.842993 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.848633 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-kglr7" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.849081 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.882303 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg"] Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.905201 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/403406f0-ed75-4c4d-878b-a21885f105d2-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-k62mg\" (UID: \"403406f0-ed75-4c4d-878b-a21885f105d2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.905329 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddgj9\" (UniqueName: \"kubernetes.io/projected/403406f0-ed75-4c4d-878b-a21885f105d2-kube-api-access-ddgj9\") pod \"observability-ui-dashboards-66cbf594b5-k62mg\" (UID: \"403406f0-ed75-4c4d-878b-a21885f105d2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:27 crc 
kubenswrapper[4995]: I0126 23:25:27.947177 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.006777 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/403406f0-ed75-4c4d-878b-a21885f105d2-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-k62mg\" (UID: \"403406f0-ed75-4c4d-878b-a21885f105d2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.006890 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddgj9\" (UniqueName: \"kubernetes.io/projected/403406f0-ed75-4c4d-878b-a21885f105d2-kube-api-access-ddgj9\") pod \"observability-ui-dashboards-66cbf594b5-k62mg\" (UID: \"403406f0-ed75-4c4d-878b-a21885f105d2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:28 crc kubenswrapper[4995]: E0126 23:25:28.007332 4995 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 26 23:25:28 crc kubenswrapper[4995]: E0126 23:25:28.007379 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/403406f0-ed75-4c4d-878b-a21885f105d2-serving-cert podName:403406f0-ed75-4c4d-878b-a21885f105d2 nodeName:}" failed. No retries permitted until 2026-01-26 23:25:28.507364579 +0000 UTC m=+1032.672072044 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/403406f0-ed75-4c4d-878b-a21885f105d2-serving-cert") pod "observability-ui-dashboards-66cbf594b5-k62mg" (UID: "403406f0-ed75-4c4d-878b-a21885f105d2") : secret "observability-ui-dashboards" not found Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.032372 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.034627 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.039907 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddgj9\" (UniqueName: \"kubernetes.io/projected/403406f0-ed75-4c4d-878b-a21885f105d2-kube-api-access-ddgj9\") pod \"observability-ui-dashboards-66cbf594b5-k62mg\" (UID: \"403406f0-ed75-4c4d-878b-a21885f105d2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.045492 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.045646 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.045739 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-wlv4m" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.045805 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.045923 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.045953 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.046063 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.046094 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.055510 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.125913 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142280 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142428 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod 
\"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142511 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142572 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142650 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142692 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 
23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142761 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0d12a498-5a42-42d5-9ab1-12d436c41187-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142829 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-config\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142859 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtrzp\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-kube-api-access-wtrzp\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.201532 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-fdbdb9c5b-g5zw8"] Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.203294 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.231262 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fdbdb9c5b-g5zw8"] Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.243816 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.243849 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.243912 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.243951 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.243976 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.244003 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.244026 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.244056 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0d12a498-5a42-42d5-9ab1-12d436c41187-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.244079 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-config\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.244175 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtrzp\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-kube-api-access-wtrzp\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.246720 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.249445 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"f3e7ef92-19e4-45be-ba39-e8c1b10c2110","Type":"ContainerStarted","Data":"af898602486bbd8c6c6157c2639e73c909ad485c5d6cbfe7b28ea19f3b85c23d"} Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.250370 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.250816 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.251084 4995 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.251157 4995 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/07692cb0263c36332c1ef11dc7b21734b21031d82ebacc820f394211727ef21a/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.252044 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.252474 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0d12a498-5a42-42d5-9ab1-12d436c41187-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.264173 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-config\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.267055 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"37ec7b7e-84e8-4a58-b676-c06ed9a0809e","Type":"ContainerStarted","Data":"1fe63fca4fd6cb5199a750cf9e863e7fdd11939b8e0ee09e81633ccef9bdd3c7"} Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.272501 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.276832 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtrzp\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-kube-api-access-wtrzp\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.277960 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.347163 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-service-ca\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc 
kubenswrapper[4995]: I0126 23:25:28.347468 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-oauth-serving-cert\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.347496 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7719e2c4-1e5e-4b93-b161-9126b700549f-console-oauth-config\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.347578 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7719e2c4-1e5e-4b93-b161-9126b700549f-console-serving-cert\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.347602 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-console-config\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.347680 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw7hw\" (UniqueName: \"kubernetes.io/projected/7719e2c4-1e5e-4b93-b161-9126b700549f-kube-api-access-qw7hw\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") 
" pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.347706 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-trusted-ca-bundle\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.388095 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.433062 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.452922 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw7hw\" (UniqueName: \"kubernetes.io/projected/7719e2c4-1e5e-4b93-b161-9126b700549f-kube-api-access-qw7hw\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.452966 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-trusted-ca-bundle\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.453003 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-service-ca\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.453039 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-oauth-serving-cert\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.453058 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7719e2c4-1e5e-4b93-b161-9126b700549f-console-oauth-config\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " 
pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.453127 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7719e2c4-1e5e-4b93-b161-9126b700549f-console-serving-cert\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.453146 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-console-config\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.453919 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-service-ca\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.454853 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-trusted-ca-bundle\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.458211 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7719e2c4-1e5e-4b93-b161-9126b700549f-console-serving-cert\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc 
kubenswrapper[4995]: I0126 23:25:28.459349 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-oauth-serving-cert\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.460282 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-console-config\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.469949 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7719e2c4-1e5e-4b93-b161-9126b700549f-console-oauth-config\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.493795 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw7hw\" (UniqueName: \"kubernetes.io/projected/7719e2c4-1e5e-4b93-b161-9126b700549f-kube-api-access-qw7hw\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.521562 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.558536 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/403406f0-ed75-4c4d-878b-a21885f105d2-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-k62mg\" (UID: \"403406f0-ed75-4c4d-878b-a21885f105d2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.563675 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/403406f0-ed75-4c4d-878b-a21885f105d2-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-k62mg\" (UID: \"403406f0-ed75-4c4d-878b-a21885f105d2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.691164 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.800343 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:29 crc kubenswrapper[4995]: I0126 23:25:29.290133 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"5083beb6-ae53-44e5-a82c-872943996b7b","Type":"ContainerStarted","Data":"0e6bdda80d541431db425ed666d561751c57a4ce5bae6217b0f3ab0ab6e8e764"} Jan 26 23:25:29 crc kubenswrapper[4995]: I0126 23:25:29.440153 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:25:29 crc kubenswrapper[4995]: I0126 23:25:29.634004 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fdbdb9c5b-g5zw8"] Jan 26 23:25:30 crc kubenswrapper[4995]: I0126 23:25:30.131216 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg"] Jan 26 23:25:30 crc kubenswrapper[4995]: I0126 23:25:30.306610 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fdbdb9c5b-g5zw8" event={"ID":"7719e2c4-1e5e-4b93-b161-9126b700549f","Type":"ContainerStarted","Data":"5ab3a506eb87ef5b1df90e8fcfcb421114b4618b74afb68aaf45363a6d9c0689"} Jan 26 23:25:30 crc kubenswrapper[4995]: I0126 23:25:30.308465 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerStarted","Data":"09e644cca6d7bb2e34c3abbe27a572044fa392307e8fabe836e1c584f958c8a8"} Jan 26 23:25:30 crc kubenswrapper[4995]: W0126 23:25:30.314180 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod403406f0_ed75_4c4d_878b_a21885f105d2.slice/crio-d86408ca15a7ce6c32c41e00668136470de4b70a4745c082f0296c1fd9167155 WatchSource:0}: Error finding container 
d86408ca15a7ce6c32c41e00668136470de4b70a4745c082f0296c1fd9167155: Status 404 returned error can't find the container with id d86408ca15a7ce6c32c41e00668136470de4b70a4745c082f0296c1fd9167155 Jan 26 23:25:31 crc kubenswrapper[4995]: I0126 23:25:31.316928 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" event={"ID":"403406f0-ed75-4c4d-878b-a21885f105d2","Type":"ContainerStarted","Data":"d86408ca15a7ce6c32c41e00668136470de4b70a4745c082f0296c1fd9167155"} Jan 26 23:25:32 crc kubenswrapper[4995]: I0126 23:25:32.338640 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fdbdb9c5b-g5zw8" event={"ID":"7719e2c4-1e5e-4b93-b161-9126b700549f","Type":"ContainerStarted","Data":"979f3ec3e484c57d06aebd49b07a0577cf050e7bc007763d171b1dd8799396e6"} Jan 26 23:25:32 crc kubenswrapper[4995]: I0126 23:25:32.382344 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-fdbdb9c5b-g5zw8" podStartSLOduration=4.382327718 podStartE2EDuration="4.382327718s" podCreationTimestamp="2026-01-26 23:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:25:32.380358649 +0000 UTC m=+1036.545066114" watchObservedRunningTime="2026-01-26 23:25:32.382327718 +0000 UTC m=+1036.547035183" Jan 26 23:25:38 crc kubenswrapper[4995]: I0126 23:25:38.526967 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:38 crc kubenswrapper[4995]: I0126 23:25:38.527587 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:38 crc kubenswrapper[4995]: I0126 23:25:38.527913 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:39 crc 
kubenswrapper[4995]: I0126 23:25:39.391839 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:39 crc kubenswrapper[4995]: I0126 23:25:39.485048 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-567f8c8d56-2j2x6"] Jan 26 23:25:40 crc kubenswrapper[4995]: I0126 23:25:40.893395 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:25:40 crc kubenswrapper[4995]: I0126 23:25:40.893460 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:25:40 crc kubenswrapper[4995]: I0126 23:25:40.893512 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:25:40 crc kubenswrapper[4995]: I0126 23:25:40.894235 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c18e947f3e89f6e4fe1ccdfb2540e67e2ab73a82cdb82488bfa3e6e58cba1576"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:25:40 crc kubenswrapper[4995]: I0126 23:25:40.894585 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" 
containerName="machine-config-daemon" containerID="cri-o://c18e947f3e89f6e4fe1ccdfb2540e67e2ab73a82cdb82488bfa3e6e58cba1576" gracePeriod=600 Jan 26 23:25:41 crc kubenswrapper[4995]: E0126 23:25:41.846382 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 26 23:25:41 crc kubenswrapper[4995]: E0126 23:25:41.846648 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hflxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifec
ycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_watcher-kuttl-default(5da7bc3d-c0c7-4935-ba58-c64da8c943b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:25:41 crc kubenswrapper[4995]: E0126 23:25:41.847923 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/openstack-galera-0" podUID="5da7bc3d-c0c7-4935-ba58-c64da8c943b0" Jan 26 23:25:42 crc kubenswrapper[4995]: I0126 23:25:42.433628 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="c18e947f3e89f6e4fe1ccdfb2540e67e2ab73a82cdb82488bfa3e6e58cba1576" exitCode=0 Jan 26 23:25:42 crc kubenswrapper[4995]: I0126 23:25:42.433723 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"c18e947f3e89f6e4fe1ccdfb2540e67e2ab73a82cdb82488bfa3e6e58cba1576"} Jan 26 23:25:42 crc kubenswrapper[4995]: I0126 23:25:42.433761 4995 scope.go:117] "RemoveContainer" containerID="b4093ba3ef240f4a22dc52fad4871f90a715052046ec4b9cbcd3de91d7cc9c46" Jan 26 23:25:42 crc kubenswrapper[4995]: E0126 23:25:42.436553 4995 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="watcher-kuttl-default/openstack-galera-0" podUID="5da7bc3d-c0c7-4935-ba58-c64da8c943b0" Jan 26 23:25:43 crc kubenswrapper[4995]: E0126 23:25:43.270037 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 26 23:25:43 crc kubenswrapper[4995]: E0126 23:25:43.271024 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q4ws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_watcher-kuttl-default(4b909799-2071-4d68-ab55-d29f6e224bf2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:25:43 crc 
kubenswrapper[4995]: E0126 23:25:43.272522 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/rabbitmq-server-0" podUID="4b909799-2071-4d68-ab55-d29f6e224bf2" Jan 26 23:25:43 crc kubenswrapper[4995]: E0126 23:25:43.444922 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="watcher-kuttl-default/rabbitmq-server-0" podUID="4b909799-2071-4d68-ab55-d29f6e224bf2" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.119457 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.119722 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n564h5f8h67dh5dbh8chc8h54bhc4h5b5hdh57dh86h678h66ch55h64fh5d9h655h5b5h5dfhdh5b5hc8h556h589h5ffh8dh579hfbh96h697h96q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qjbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_watcher-kuttl-default(37ec7b7e-84e8-4a58-b676-c06ed9a0809e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.120942 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/memcached-0" podUID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.142553 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.142977 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hk2rx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPro
be:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_watcher-kuttl-default(54ccebac-5075-4c00-a1e9-ebb66b43876e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.144191 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podUID="54ccebac-5075-4c00-a1e9-ebb66b43876e" Jan 26 23:25:44 crc kubenswrapper[4995]: I0126 23:25:44.452609 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" event={"ID":"403406f0-ed75-4c4d-878b-a21885f105d2","Type":"ContainerStarted","Data":"c3beeaa724ee3bcb7c246b00a87de1cf72babff4456987ffa30e10064d5c865f"} Jan 26 23:25:44 crc kubenswrapper[4995]: I0126 23:25:44.454865 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"45bd20296ff6d5aa0cde32c140dff26a4c42cad2ac9cddbd09b95d31149b3d69"} Jan 26 23:25:44 crc kubenswrapper[4995]: I0126 23:25:44.456616 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"f3e7ef92-19e4-45be-ba39-e8c1b10c2110","Type":"ContainerStarted","Data":"dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f"} Jan 26 23:25:44 crc kubenswrapper[4995]: I0126 23:25:44.456941 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.458521 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="watcher-kuttl-default/memcached-0" podUID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.458854 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podUID="54ccebac-5075-4c00-a1e9-ebb66b43876e" Jan 26 23:25:44 crc kubenswrapper[4995]: I0126 23:25:44.469962 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" podStartSLOduration=3.719457473 podStartE2EDuration="17.469945048s" podCreationTimestamp="2026-01-26 23:25:27 +0000 UTC" firstStartedPulling="2026-01-26 23:25:30.318387857 +0000 UTC m=+1034.483095322" lastFinishedPulling="2026-01-26 23:25:44.068875412 +0000 UTC m=+1048.233582897" observedRunningTime="2026-01-26 23:25:44.468787129 +0000 UTC m=+1048.633494594" watchObservedRunningTime="2026-01-26 23:25:44.469945048 +0000 UTC m=+1048.634652513" Jan 26 23:25:44 crc kubenswrapper[4995]: I0126 23:25:44.558815 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=2.143186557 podStartE2EDuration="18.558794101s" podCreationTimestamp="2026-01-26 23:25:26 +0000 UTC" firstStartedPulling="2026-01-26 23:25:27.646841037 +0000 UTC m=+1031.811548502" lastFinishedPulling="2026-01-26 23:25:44.062448541 +0000 UTC m=+1048.227156046" observedRunningTime="2026-01-26 23:25:44.553760435 +0000 UTC m=+1048.718467910" watchObservedRunningTime="2026-01-26 23:25:44.558794101 +0000 UTC m=+1048.723501566" Jan 26 23:25:47 crc kubenswrapper[4995]: I0126 23:25:47.480264 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"5083beb6-ae53-44e5-a82c-872943996b7b","Type":"ContainerStarted","Data":"292a111e21591204ddcff9f67d10ef28cf63c9fa8de4ac90bce69c4c744ab1ac"} Jan 26 23:25:48 crc kubenswrapper[4995]: I0126 23:25:48.492209 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerStarted","Data":"60fe22fde9a4342de9f3d1074bc86d7eebf6bacf28576a78e4d758d91299a714"} Jan 26 23:25:55 crc kubenswrapper[4995]: I0126 23:25:55.556299 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"5da7bc3d-c0c7-4935-ba58-c64da8c943b0","Type":"ContainerStarted","Data":"ebbc3151e1aabd6a0948c68b058ab7ffcc016f7c745ce9b3fb26d1dd0241057b"} Jan 26 23:25:55 crc kubenswrapper[4995]: I0126 23:25:55.563290 4995 generic.go:334] "Generic (PLEG): container finished" podID="5083beb6-ae53-44e5-a82c-872943996b7b" containerID="292a111e21591204ddcff9f67d10ef28cf63c9fa8de4ac90bce69c4c744ab1ac" exitCode=0 Jan 26 23:25:55 crc kubenswrapper[4995]: I0126 23:25:55.563347 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" 
event={"ID":"5083beb6-ae53-44e5-a82c-872943996b7b","Type":"ContainerDied","Data":"292a111e21591204ddcff9f67d10ef28cf63c9fa8de4ac90bce69c4c744ab1ac"} Jan 26 23:25:56 crc kubenswrapper[4995]: I0126 23:25:56.575415 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"37ec7b7e-84e8-4a58-b676-c06ed9a0809e","Type":"ContainerStarted","Data":"3e04e760b0c77644e191bf4781347a5b2f4ffde2d098dc88a856836722be3efd"} Jan 26 23:25:56 crc kubenswrapper[4995]: I0126 23:25:56.576629 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:56 crc kubenswrapper[4995]: I0126 23:25:56.578082 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"4b909799-2071-4d68-ab55-d29f6e224bf2","Type":"ContainerStarted","Data":"e7f29c93726d236f06aa9087d1e9d21bb2a28fa032ee9081e34c3fa5089b832d"} Jan 26 23:25:56 crc kubenswrapper[4995]: I0126 23:25:56.579986 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerID="60fe22fde9a4342de9f3d1074bc86d7eebf6bacf28576a78e4d758d91299a714" exitCode=0 Jan 26 23:25:56 crc kubenswrapper[4995]: I0126 23:25:56.580007 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerDied","Data":"60fe22fde9a4342de9f3d1074bc86d7eebf6bacf28576a78e4d758d91299a714"} Jan 26 23:25:56 crc kubenswrapper[4995]: I0126 23:25:56.612854 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=2.057420108 podStartE2EDuration="30.612838032s" podCreationTimestamp="2026-01-26 23:25:26 +0000 UTC" firstStartedPulling="2026-01-26 23:25:27.375257247 +0000 UTC m=+1031.539964712" lastFinishedPulling="2026-01-26 23:25:55.930675161 +0000 UTC m=+1060.095382636" 
observedRunningTime="2026-01-26 23:25:56.610916404 +0000 UTC m=+1060.775623879" watchObservedRunningTime="2026-01-26 23:25:56.612838032 +0000 UTC m=+1060.777545497" Jan 26 23:25:57 crc kubenswrapper[4995]: I0126 23:25:57.099629 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:25:58 crc kubenswrapper[4995]: I0126 23:25:58.617623 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"5083beb6-ae53-44e5-a82c-872943996b7b","Type":"ContainerStarted","Data":"3ba9dbe6094498b682a56dc9388e05547145b296dc917a1fb2de2a1e7531d322"} Jan 26 23:25:58 crc kubenswrapper[4995]: I0126 23:25:58.619804 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"54ccebac-5075-4c00-a1e9-ebb66b43876e","Type":"ContainerStarted","Data":"17e91e1277b3bd73ad09330618c0692deb641db4b090da3d0321626052d2c9c3"} Jan 26 23:25:59 crc kubenswrapper[4995]: I0126 23:25:59.628717 4995 generic.go:334] "Generic (PLEG): container finished" podID="5da7bc3d-c0c7-4935-ba58-c64da8c943b0" containerID="ebbc3151e1aabd6a0948c68b058ab7ffcc016f7c745ce9b3fb26d1dd0241057b" exitCode=0 Jan 26 23:25:59 crc kubenswrapper[4995]: I0126 23:25:59.628908 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"5da7bc3d-c0c7-4935-ba58-c64da8c943b0","Type":"ContainerDied","Data":"ebbc3151e1aabd6a0948c68b058ab7ffcc016f7c745ce9b3fb26d1dd0241057b"} Jan 26 23:26:00 crc kubenswrapper[4995]: I0126 23:26:00.642025 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"5da7bc3d-c0c7-4935-ba58-c64da8c943b0","Type":"ContainerStarted","Data":"919fe0961ef22744aeca9b6012860efea116b94a22f6c927f955f56a31555ab2"} Jan 26 23:26:00 crc kubenswrapper[4995]: I0126 23:26:00.646377 4995 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"5083beb6-ae53-44e5-a82c-872943996b7b","Type":"ContainerStarted","Data":"fa0981348c0a1c624f5558a6dd68d2e8df54a7f49066cbb5483262294a260969"} Jan 26 23:26:00 crc kubenswrapper[4995]: I0126 23:26:00.646898 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:26:00 crc kubenswrapper[4995]: I0126 23:26:00.667300 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstack-galera-0" podStartSLOduration=7.682104217 podStartE2EDuration="35.66728024s" podCreationTimestamp="2026-01-26 23:25:25 +0000 UTC" firstStartedPulling="2026-01-26 23:25:27.005664909 +0000 UTC m=+1031.170372374" lastFinishedPulling="2026-01-26 23:25:54.990840932 +0000 UTC m=+1059.155548397" observedRunningTime="2026-01-26 23:26:00.666588723 +0000 UTC m=+1064.831296228" watchObservedRunningTime="2026-01-26 23:26:00.66728024 +0000 UTC m=+1064.831987705" Jan 26 23:26:00 crc kubenswrapper[4995]: I0126 23:26:00.695492 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/alertmanager-metric-storage-0" podStartSLOduration=4.7456273719999995 podStartE2EDuration="33.695466935s" podCreationTimestamp="2026-01-26 23:25:27 +0000 UTC" firstStartedPulling="2026-01-26 23:25:28.917636205 +0000 UTC m=+1033.082343690" lastFinishedPulling="2026-01-26 23:25:57.867475788 +0000 UTC m=+1062.032183253" observedRunningTime="2026-01-26 23:26:00.688159263 +0000 UTC m=+1064.852866758" watchObservedRunningTime="2026-01-26 23:26:00.695466935 +0000 UTC m=+1064.860174420" Jan 26 23:26:01 crc kubenswrapper[4995]: I0126 23:26:01.658863 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:26:01 crc kubenswrapper[4995]: I0126 23:26:01.739237 4995 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Jan 26 23:26:04 crc kubenswrapper[4995]: I0126 23:26:04.563013 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-567f8c8d56-2j2x6" podUID="05869402-35d4-4054-845a-e45b6e9ed633" containerName="console" containerID="cri-o://e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95" gracePeriod=15 Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.536881 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-567f8c8d56-2j2x6_05869402-35d4-4054-845a-e45b6e9ed633/console/0.log" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.537474 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.603206 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-oauth-serving-cert\") pod \"05869402-35d4-4054-845a-e45b6e9ed633\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.603259 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdjsd\" (UniqueName: \"kubernetes.io/projected/05869402-35d4-4054-845a-e45b6e9ed633-kube-api-access-jdjsd\") pod \"05869402-35d4-4054-845a-e45b6e9ed633\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.603332 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-trusted-ca-bundle\") pod \"05869402-35d4-4054-845a-e45b6e9ed633\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 
23:26:05.603392 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-service-ca\") pod \"05869402-35d4-4054-845a-e45b6e9ed633\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.603430 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-console-config\") pod \"05869402-35d4-4054-845a-e45b6e9ed633\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.603491 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-oauth-config\") pod \"05869402-35d4-4054-845a-e45b6e9ed633\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.603610 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-serving-cert\") pod \"05869402-35d4-4054-845a-e45b6e9ed633\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.604523 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "05869402-35d4-4054-845a-e45b6e9ed633" (UID: "05869402-35d4-4054-845a-e45b6e9ed633"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.605378 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-console-config" (OuterVolumeSpecName: "console-config") pod "05869402-35d4-4054-845a-e45b6e9ed633" (UID: "05869402-35d4-4054-845a-e45b6e9ed633"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.605478 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "05869402-35d4-4054-845a-e45b6e9ed633" (UID: "05869402-35d4-4054-845a-e45b6e9ed633"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.605636 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-service-ca" (OuterVolumeSpecName: "service-ca") pod "05869402-35d4-4054-845a-e45b6e9ed633" (UID: "05869402-35d4-4054-845a-e45b6e9ed633"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.609363 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "05869402-35d4-4054-845a-e45b6e9ed633" (UID: "05869402-35d4-4054-845a-e45b6e9ed633"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.610425 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05869402-35d4-4054-845a-e45b6e9ed633-kube-api-access-jdjsd" (OuterVolumeSpecName: "kube-api-access-jdjsd") pod "05869402-35d4-4054-845a-e45b6e9ed633" (UID: "05869402-35d4-4054-845a-e45b6e9ed633"). InnerVolumeSpecName "kube-api-access-jdjsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.610507 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "05869402-35d4-4054-845a-e45b6e9ed633" (UID: "05869402-35d4-4054-845a-e45b6e9ed633"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.706786 4995 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.706873 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdjsd\" (UniqueName: \"kubernetes.io/projected/05869402-35d4-4054-845a-e45b6e9ed633-kube-api-access-jdjsd\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.706892 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.706909 4995 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.706919 4995 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.706929 4995 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.706937 4995 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.709430 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerStarted","Data":"09365b795c4ad40149307a21bb9b3674f94b5fbd9fb5e8958df02a30eb16d82b"} Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.712535 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-567f8c8d56-2j2x6_05869402-35d4-4054-845a-e45b6e9ed633/console/0.log" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.712589 4995 generic.go:334] "Generic (PLEG): container finished" podID="05869402-35d4-4054-845a-e45b6e9ed633" containerID="e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95" exitCode=2 Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.712624 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-567f8c8d56-2j2x6" 
event={"ID":"05869402-35d4-4054-845a-e45b6e9ed633","Type":"ContainerDied","Data":"e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95"} Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.712645 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-567f8c8d56-2j2x6" event={"ID":"05869402-35d4-4054-845a-e45b6e9ed633","Type":"ContainerDied","Data":"caaa99e8918dfe5e0d9cbad0907826dac119f7c0d5e453be225658d7ea0903b4"} Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.712666 4995 scope.go:117] "RemoveContainer" containerID="e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.712666 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.738351 4995 scope.go:117] "RemoveContainer" containerID="e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95" Jan 26 23:26:05 crc kubenswrapper[4995]: E0126 23:26:05.738754 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95\": container with ID starting with e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95 not found: ID does not exist" containerID="e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.738790 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95"} err="failed to get container status \"e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95\": rpc error: code = NotFound desc = could not find container \"e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95\": container with ID starting with 
e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95 not found: ID does not exist" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.754647 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-567f8c8d56-2j2x6"] Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.762254 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-567f8c8d56-2j2x6"] Jan 26 23:26:06 crc kubenswrapper[4995]: I0126 23:26:06.403286 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:26:06 crc kubenswrapper[4995]: I0126 23:26:06.405795 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:26:06 crc kubenswrapper[4995]: I0126 23:26:06.526371 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05869402-35d4-4054-845a-e45b6e9ed633" path="/var/lib/kubelet/pods/05869402-35d4-4054-845a-e45b6e9ed633/volumes" Jan 26 23:26:06 crc kubenswrapper[4995]: I0126 23:26:06.789736 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:26:07 crc kubenswrapper[4995]: I0126 23:26:07.822737 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:26:08 crc kubenswrapper[4995]: I0126 23:26:08.744284 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerStarted","Data":"bd5dfbb02b8531c020e670b9f902d417ae21031bc93d721afb834a5013e17932"} Jan 26 23:26:13 crc kubenswrapper[4995]: I0126 23:26:13.795597 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" 
event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerStarted","Data":"16d9e079f8d7d37a004ac0ceaa971f9a942ef0d5ffcfa30b1b10720ab9d634c1"} Jan 26 23:26:13 crc kubenswrapper[4995]: I0126 23:26:13.835194 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=4.717680243 podStartE2EDuration="47.835171463s" podCreationTimestamp="2026-01-26 23:25:26 +0000 UTC" firstStartedPulling="2026-01-26 23:25:29.732706514 +0000 UTC m=+1033.897413979" lastFinishedPulling="2026-01-26 23:26:12.850197704 +0000 UTC m=+1077.014905199" observedRunningTime="2026-01-26 23:26:13.826457465 +0000 UTC m=+1077.991164940" watchObservedRunningTime="2026-01-26 23:26:13.835171463 +0000 UTC m=+1077.999878938" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.219027 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/root-account-create-update-tkjsp"] Jan 26 23:26:15 crc kubenswrapper[4995]: E0126 23:26:15.219426 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05869402-35d4-4054-845a-e45b6e9ed633" containerName="console" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.219441 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="05869402-35d4-4054-845a-e45b6e9ed633" containerName="console" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.219668 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="05869402-35d4-4054-845a-e45b6e9ed633" containerName="console" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.220325 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.226882 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.247693 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-tkjsp"] Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.361359 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c339608-1d36-448f-b3cd-00252341cf0d-operator-scripts\") pod \"root-account-create-update-tkjsp\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.361437 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzzkw\" (UniqueName: \"kubernetes.io/projected/2c339608-1d36-448f-b3cd-00252341cf0d-kube-api-access-hzzkw\") pod \"root-account-create-update-tkjsp\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.464021 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c339608-1d36-448f-b3cd-00252341cf0d-operator-scripts\") pod \"root-account-create-update-tkjsp\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.464177 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzzkw\" (UniqueName: 
\"kubernetes.io/projected/2c339608-1d36-448f-b3cd-00252341cf0d-kube-api-access-hzzkw\") pod \"root-account-create-update-tkjsp\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.464937 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c339608-1d36-448f-b3cd-00252341cf0d-operator-scripts\") pod \"root-account-create-update-tkjsp\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.506224 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzzkw\" (UniqueName: \"kubernetes.io/projected/2c339608-1d36-448f-b3cd-00252341cf0d-kube-api-access-hzzkw\") pod \"root-account-create-update-tkjsp\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.583995 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.043732 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-tkjsp"] Jan 26 23:26:16 crc kubenswrapper[4995]: W0126 23:26:16.064082 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c339608_1d36_448f_b3cd_00252341cf0d.slice/crio-93c553598dee4a2dec72bb1ca9d8e6f0e17a72e4c444bc0ef778ab9489516055 WatchSource:0}: Error finding container 93c553598dee4a2dec72bb1ca9d8e6f0e17a72e4c444bc0ef778ab9489516055: Status 404 returned error can't find the container with id 93c553598dee4a2dec72bb1ca9d8e6f0e17a72e4c444bc0ef778ab9489516055 Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.319077 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-create-4fsqw"] Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.319969 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.363160 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-4fsqw"] Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.440514 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb"] Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.441631 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.444942 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-db-secret" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.449056 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb"] Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.486949 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9k6j\" (UniqueName: \"kubernetes.io/projected/513f0b17-1707-4c0c-bc81-d7ead6a553c8-kube-api-access-h9k6j\") pod \"keystone-db-create-4fsqw\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.487274 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/513f0b17-1707-4c0c-bc81-d7ead6a553c8-operator-scripts\") pod \"keystone-db-create-4fsqw\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.590031 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn7cn\" (UniqueName: \"kubernetes.io/projected/94023397-a2e2-42cb-8469-003bc383aeaa-kube-api-access-rn7cn\") pod \"keystone-ea46-account-create-update-c7cfb\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.590645 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9k6j\" (UniqueName: 
\"kubernetes.io/projected/513f0b17-1707-4c0c-bc81-d7ead6a553c8-kube-api-access-h9k6j\") pod \"keystone-db-create-4fsqw\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.590756 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/513f0b17-1707-4c0c-bc81-d7ead6a553c8-operator-scripts\") pod \"keystone-db-create-4fsqw\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.590924 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94023397-a2e2-42cb-8469-003bc383aeaa-operator-scripts\") pod \"keystone-ea46-account-create-update-c7cfb\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.591712 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/513f0b17-1707-4c0c-bc81-d7ead6a553c8-operator-scripts\") pod \"keystone-db-create-4fsqw\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.609726 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9k6j\" (UniqueName: \"kubernetes.io/projected/513f0b17-1707-4c0c-bc81-d7ead6a553c8-kube-api-access-h9k6j\") pod \"keystone-db-create-4fsqw\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.661973 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.692248 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94023397-a2e2-42cb-8469-003bc383aeaa-operator-scripts\") pod \"keystone-ea46-account-create-update-c7cfb\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.692369 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn7cn\" (UniqueName: \"kubernetes.io/projected/94023397-a2e2-42cb-8469-003bc383aeaa-kube-api-access-rn7cn\") pod \"keystone-ea46-account-create-update-c7cfb\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.693094 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94023397-a2e2-42cb-8469-003bc383aeaa-operator-scripts\") pod \"keystone-ea46-account-create-update-c7cfb\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.708633 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn7cn\" (UniqueName: \"kubernetes.io/projected/94023397-a2e2-42cb-8469-003bc383aeaa-kube-api-access-rn7cn\") pod \"keystone-ea46-account-create-update-c7cfb\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.765224 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.823310 4995 generic.go:334] "Generic (PLEG): container finished" podID="2c339608-1d36-448f-b3cd-00252341cf0d" containerID="a3d0cf0c24bcaec0a584ae1322d81bc2cc97c571dfb1efe06bea1c6a8030ba2d" exitCode=0 Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.823345 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-tkjsp" event={"ID":"2c339608-1d36-448f-b3cd-00252341cf0d","Type":"ContainerDied","Data":"a3d0cf0c24bcaec0a584ae1322d81bc2cc97c571dfb1efe06bea1c6a8030ba2d"} Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.823369 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-tkjsp" event={"ID":"2c339608-1d36-448f-b3cd-00252341cf0d","Type":"ContainerStarted","Data":"93c553598dee4a2dec72bb1ca9d8e6f0e17a72e4c444bc0ef778ab9489516055"} Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.913270 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-4fsqw"] Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.209218 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb"] Jan 26 23:26:17 crc kubenswrapper[4995]: W0126 23:26:17.210815 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94023397_a2e2_42cb_8469_003bc383aeaa.slice/crio-ebb915993d64506c4891273e5ad0cb862dc0d6281366c488ac36eb2debe220c2 WatchSource:0}: Error finding container ebb915993d64506c4891273e5ad0cb862dc0d6281366c488ac36eb2debe220c2: Status 404 returned error can't find the container with id ebb915993d64506c4891273e5ad0cb862dc0d6281366c488ac36eb2debe220c2 Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.218018 4995 reflector.go:368] 
Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-db-secret" Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.836449 4995 generic.go:334] "Generic (PLEG): container finished" podID="513f0b17-1707-4c0c-bc81-d7ead6a553c8" containerID="87d87779d4c3502bc67575e7abc513b3a091bacd50d75b12711b8a101c37d329" exitCode=0 Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.836531 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-4fsqw" event={"ID":"513f0b17-1707-4c0c-bc81-d7ead6a553c8","Type":"ContainerDied","Data":"87d87779d4c3502bc67575e7abc513b3a091bacd50d75b12711b8a101c37d329"} Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.836562 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-4fsqw" event={"ID":"513f0b17-1707-4c0c-bc81-d7ead6a553c8","Type":"ContainerStarted","Data":"246d83749df994b0e10a3947bc98b209d3e9bc7aade4145b32b167ace5893f6b"} Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.839654 4995 generic.go:334] "Generic (PLEG): container finished" podID="94023397-a2e2-42cb-8469-003bc383aeaa" containerID="02cef367fb01441bf0b8a9914fe6804f776043582c13fe0f23584fe155ab9938" exitCode=0 Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.839726 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" event={"ID":"94023397-a2e2-42cb-8469-003bc383aeaa","Type":"ContainerDied","Data":"02cef367fb01441bf0b8a9914fe6804f776043582c13fe0f23584fe155ab9938"} Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.840080 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" event={"ID":"94023397-a2e2-42cb-8469-003bc383aeaa","Type":"ContainerStarted","Data":"ebb915993d64506c4891273e5ad0cb862dc0d6281366c488ac36eb2debe220c2"} Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.253204 4995 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.322695 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzzkw\" (UniqueName: \"kubernetes.io/projected/2c339608-1d36-448f-b3cd-00252341cf0d-kube-api-access-hzzkw\") pod \"2c339608-1d36-448f-b3cd-00252341cf0d\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.322757 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c339608-1d36-448f-b3cd-00252341cf0d-operator-scripts\") pod \"2c339608-1d36-448f-b3cd-00252341cf0d\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.324055 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c339608-1d36-448f-b3cd-00252341cf0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c339608-1d36-448f-b3cd-00252341cf0d" (UID: "2c339608-1d36-448f-b3cd-00252341cf0d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.333423 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c339608-1d36-448f-b3cd-00252341cf0d-kube-api-access-hzzkw" (OuterVolumeSpecName: "kube-api-access-hzzkw") pod "2c339608-1d36-448f-b3cd-00252341cf0d" (UID: "2c339608-1d36-448f-b3cd-00252341cf0d"). InnerVolumeSpecName "kube-api-access-hzzkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.424434 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzzkw\" (UniqueName: \"kubernetes.io/projected/2c339608-1d36-448f-b3cd-00252341cf0d-kube-api-access-hzzkw\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.424471 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c339608-1d36-448f-b3cd-00252341cf0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.434083 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.852685 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-tkjsp" event={"ID":"2c339608-1d36-448f-b3cd-00252341cf0d","Type":"ContainerDied","Data":"93c553598dee4a2dec72bb1ca9d8e6f0e17a72e4c444bc0ef778ab9489516055"} Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.852759 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93c553598dee4a2dec72bb1ca9d8e6f0e17a72e4c444bc0ef778ab9489516055" Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.852849 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.219777 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.224994 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.338048 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/513f0b17-1707-4c0c-bc81-d7ead6a553c8-operator-scripts\") pod \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.338152 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94023397-a2e2-42cb-8469-003bc383aeaa-operator-scripts\") pod \"94023397-a2e2-42cb-8469-003bc383aeaa\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.338176 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9k6j\" (UniqueName: \"kubernetes.io/projected/513f0b17-1707-4c0c-bc81-d7ead6a553c8-kube-api-access-h9k6j\") pod \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.338240 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn7cn\" (UniqueName: \"kubernetes.io/projected/94023397-a2e2-42cb-8469-003bc383aeaa-kube-api-access-rn7cn\") pod \"94023397-a2e2-42cb-8469-003bc383aeaa\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.338586 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513f0b17-1707-4c0c-bc81-d7ead6a553c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "513f0b17-1707-4c0c-bc81-d7ead6a553c8" (UID: "513f0b17-1707-4c0c-bc81-d7ead6a553c8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.338724 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94023397-a2e2-42cb-8469-003bc383aeaa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94023397-a2e2-42cb-8469-003bc383aeaa" (UID: "94023397-a2e2-42cb-8469-003bc383aeaa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.342705 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513f0b17-1707-4c0c-bc81-d7ead6a553c8-kube-api-access-h9k6j" (OuterVolumeSpecName: "kube-api-access-h9k6j") pod "513f0b17-1707-4c0c-bc81-d7ead6a553c8" (UID: "513f0b17-1707-4c0c-bc81-d7ead6a553c8"). InnerVolumeSpecName "kube-api-access-h9k6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.347289 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94023397-a2e2-42cb-8469-003bc383aeaa-kube-api-access-rn7cn" (OuterVolumeSpecName: "kube-api-access-rn7cn") pod "94023397-a2e2-42cb-8469-003bc383aeaa" (UID: "94023397-a2e2-42cb-8469-003bc383aeaa"). InnerVolumeSpecName "kube-api-access-rn7cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.440552 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94023397-a2e2-42cb-8469-003bc383aeaa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.440874 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9k6j\" (UniqueName: \"kubernetes.io/projected/513f0b17-1707-4c0c-bc81-d7ead6a553c8-kube-api-access-h9k6j\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.440899 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn7cn\" (UniqueName: \"kubernetes.io/projected/94023397-a2e2-42cb-8469-003bc383aeaa-kube-api-access-rn7cn\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.440917 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/513f0b17-1707-4c0c-bc81-d7ead6a553c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.865843 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.865866 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-4fsqw" event={"ID":"513f0b17-1707-4c0c-bc81-d7ead6a553c8","Type":"ContainerDied","Data":"246d83749df994b0e10a3947bc98b209d3e9bc7aade4145b32b167ace5893f6b"} Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.865955 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="246d83749df994b0e10a3947bc98b209d3e9bc7aade4145b32b167ace5893f6b" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.877980 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" event={"ID":"94023397-a2e2-42cb-8469-003bc383aeaa","Type":"ContainerDied","Data":"ebb915993d64506c4891273e5ad0cb862dc0d6281366c488ac36eb2debe220c2"} Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.878043 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebb915993d64506c4891273e5ad0cb862dc0d6281366c488ac36eb2debe220c2" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.878147 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:28 crc kubenswrapper[4995]: I0126 23:26:28.435313 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:28 crc kubenswrapper[4995]: I0126 23:26:28.439334 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:28 crc kubenswrapper[4995]: I0126 23:26:28.963310 4995 generic.go:334] "Generic (PLEG): container finished" podID="4b909799-2071-4d68-ab55-d29f6e224bf2" containerID="e7f29c93726d236f06aa9087d1e9d21bb2a28fa032ee9081e34c3fa5089b832d" exitCode=0 Jan 26 23:26:28 crc kubenswrapper[4995]: I0126 23:26:28.963444 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"4b909799-2071-4d68-ab55-d29f6e224bf2","Type":"ContainerDied","Data":"e7f29c93726d236f06aa9087d1e9d21bb2a28fa032ee9081e34c3fa5089b832d"} Jan 26 23:26:28 crc kubenswrapper[4995]: I0126 23:26:28.966252 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:29 crc kubenswrapper[4995]: I0126 23:26:29.973248 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"4b909799-2071-4d68-ab55-d29f6e224bf2","Type":"ContainerStarted","Data":"2bbeb0b3af7340893132d357f729117c681feea0d49203b5ba6681c3ed9e4488"} Jan 26 23:26:29 crc kubenswrapper[4995]: I0126 23:26:29.974832 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:26:29 crc kubenswrapper[4995]: I0126 23:26:29.976877 4995 generic.go:334] "Generic (PLEG): container finished" podID="54ccebac-5075-4c00-a1e9-ebb66b43876e" containerID="17e91e1277b3bd73ad09330618c0692deb641db4b090da3d0321626052d2c9c3" exitCode=0 Jan 
26 23:26:29 crc kubenswrapper[4995]: I0126 23:26:29.976958 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"54ccebac-5075-4c00-a1e9-ebb66b43876e","Type":"ContainerDied","Data":"17e91e1277b3bd73ad09330618c0692deb641db4b090da3d0321626052d2c9c3"} Jan 26 23:26:30 crc kubenswrapper[4995]: I0126 23:26:30.004690 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-server-0" podStartSLOduration=37.702226673 podStartE2EDuration="1m7.004672707s" podCreationTimestamp="2026-01-26 23:25:23 +0000 UTC" firstStartedPulling="2026-01-26 23:25:25.752198185 +0000 UTC m=+1029.916905700" lastFinishedPulling="2026-01-26 23:25:55.054644249 +0000 UTC m=+1059.219351734" observedRunningTime="2026-01-26 23:26:30.002836301 +0000 UTC m=+1094.167543776" watchObservedRunningTime="2026-01-26 23:26:30.004672707 +0000 UTC m=+1094.169380172" Jan 26 23:26:30 crc kubenswrapper[4995]: I0126 23:26:30.989663 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"54ccebac-5075-4c00-a1e9-ebb66b43876e","Type":"ContainerStarted","Data":"6fcbc0e6cd5e113b3be60c17a9d7503e8bfa7c29370ea8b503ff58089a08b53c"} Jan 26 23:26:30 crc kubenswrapper[4995]: I0126 23:26:30.990308 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:26:31 crc kubenswrapper[4995]: I0126 23:26:31.026930 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podStartSLOduration=-9223371968.82787 podStartE2EDuration="1m8.026907057s" podCreationTimestamp="2026-01-26 23:25:23 +0000 UTC" firstStartedPulling="2026-01-26 23:25:25.482141342 +0000 UTC m=+1029.646848807" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:26:31.018044155 +0000 UTC 
m=+1095.182751660" watchObservedRunningTime="2026-01-26 23:26:31.026907057 +0000 UTC m=+1095.191614562" Jan 26 23:26:31 crc kubenswrapper[4995]: I0126 23:26:31.254469 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:26:31 crc kubenswrapper[4995]: I0126 23:26:31.254799 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="prometheus" containerID="cri-o://09365b795c4ad40149307a21bb9b3674f94b5fbd9fb5e8958df02a30eb16d82b" gracePeriod=600 Jan 26 23:26:31 crc kubenswrapper[4995]: I0126 23:26:31.255394 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="config-reloader" containerID="cri-o://bd5dfbb02b8531c020e670b9f902d417ae21031bc93d721afb834a5013e17932" gracePeriod=600 Jan 26 23:26:31 crc kubenswrapper[4995]: I0126 23:26:31.255367 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="thanos-sidecar" containerID="cri-o://16d9e079f8d7d37a004ac0ceaa971f9a942ef0d5ffcfa30b1b10720ab9d634c1" gracePeriod=600 Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.008046 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerID="16d9e079f8d7d37a004ac0ceaa971f9a942ef0d5ffcfa30b1b10720ab9d634c1" exitCode=0 Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.008542 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerDied","Data":"16d9e079f8d7d37a004ac0ceaa971f9a942ef0d5ffcfa30b1b10720ab9d634c1"} Jan 26 23:26:32 crc 
kubenswrapper[4995]: I0126 23:26:32.008617 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerDied","Data":"bd5dfbb02b8531c020e670b9f902d417ae21031bc93d721afb834a5013e17932"} Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.008563 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerID="bd5dfbb02b8531c020e670b9f902d417ae21031bc93d721afb834a5013e17932" exitCode=0 Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.008662 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerID="09365b795c4ad40149307a21bb9b3674f94b5fbd9fb5e8958df02a30eb16d82b" exitCode=0 Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.009216 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerDied","Data":"09365b795c4ad40149307a21bb9b3674f94b5fbd9fb5e8958df02a30eb16d82b"} Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.200975 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.260542 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-1\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.260632 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-tls-assets\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.260677 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-thanos-prometheus-http-client-file\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.260716 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-0\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.260911 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") 
" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.260994 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0d12a498-5a42-42d5-9ab1-12d436c41187-config-out\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261034 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-2\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261075 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-config\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261114 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-web-config\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261152 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtrzp\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-kube-api-access-wtrzp\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261365 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261707 4995 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261725 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261857 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.266013 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.266310 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d12a498-5a42-42d5-9ab1-12d436c41187-config-out" (OuterVolumeSpecName: "config-out") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.266598 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-kube-api-access-wtrzp" (OuterVolumeSpecName: "kube-api-access-wtrzp") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "kube-api-access-wtrzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.266962 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.281006 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.283284 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-config" (OuterVolumeSpecName: "config") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.286875 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-web-config" (OuterVolumeSpecName: "web-config") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363128 4995 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") on node \"crc\" " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363176 4995 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0d12a498-5a42-42d5-9ab1-12d436c41187-config-out\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363193 4995 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363212 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363225 4995 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-web-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363238 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtrzp\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-kube-api-access-wtrzp\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363250 4995 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363262 4995 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363275 4995 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.391586 4995 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.391784 4995 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5") on node "crc" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.464681 4995 reconciler_common.go:293] "Volume detached for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.018642 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerDied","Data":"09e644cca6d7bb2e34c3abbe27a572044fa392307e8fabe836e1c584f958c8a8"} Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.018706 4995 scope.go:117] "RemoveContainer" 
containerID="16d9e079f8d7d37a004ac0ceaa971f9a942ef0d5ffcfa30b1b10720ab9d634c1" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.020012 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.044379 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.051201 4995 scope.go:117] "RemoveContainer" containerID="bd5dfbb02b8531c020e670b9f902d417ae21031bc93d721afb834a5013e17932" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.053030 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.072290 4995 scope.go:117] "RemoveContainer" containerID="09365b795c4ad40149307a21bb9b3674f94b5fbd9fb5e8958df02a30eb16d82b" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.088643 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:26:33 crc kubenswrapper[4995]: E0126 23:26:33.089239 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="thanos-sidecar" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089256 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="thanos-sidecar" Jan 26 23:26:33 crc kubenswrapper[4995]: E0126 23:26:33.089286 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="init-config-reloader" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089292 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="init-config-reloader" Jan 26 23:26:33 crc 
kubenswrapper[4995]: E0126 23:26:33.089306 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c339608-1d36-448f-b3cd-00252341cf0d" containerName="mariadb-account-create-update" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089311 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c339608-1d36-448f-b3cd-00252341cf0d" containerName="mariadb-account-create-update" Jan 26 23:26:33 crc kubenswrapper[4995]: E0126 23:26:33.089329 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="prometheus" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089335 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="prometheus" Jan 26 23:26:33 crc kubenswrapper[4995]: E0126 23:26:33.089351 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513f0b17-1707-4c0c-bc81-d7ead6a553c8" containerName="mariadb-database-create" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089356 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="513f0b17-1707-4c0c-bc81-d7ead6a553c8" containerName="mariadb-database-create" Jan 26 23:26:33 crc kubenswrapper[4995]: E0126 23:26:33.089367 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="config-reloader" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089374 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="config-reloader" Jan 26 23:26:33 crc kubenswrapper[4995]: E0126 23:26:33.089383 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94023397-a2e2-42cb-8469-003bc383aeaa" containerName="mariadb-account-create-update" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089388 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="94023397-a2e2-42cb-8469-003bc383aeaa" 
containerName="mariadb-account-create-update" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089516 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="94023397-a2e2-42cb-8469-003bc383aeaa" containerName="mariadb-account-create-update" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089528 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="config-reloader" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089537 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="prometheus" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089546 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="thanos-sidecar" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089557 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c339608-1d36-448f-b3cd-00252341cf0d" containerName="mariadb-account-create-update" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089569 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="513f0b17-1707-4c0c-bc81-d7ead6a553c8" containerName="mariadb-database-create" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.090856 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.100352 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.101131 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.101437 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.101776 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.101980 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-wlv4m" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.103158 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.104181 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-metric-storage-prometheus-svc" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.109772 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.111307 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.130317 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.148058 4995 scope.go:117] "RemoveContainer" containerID="60fe22fde9a4342de9f3d1074bc86d7eebf6bacf28576a78e4d758d91299a714" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175365 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175407 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6zw6\" (UniqueName: \"kubernetes.io/projected/331b761a-fa99-405f-aedf-a94cb456cdfc-kube-api-access-r6zw6\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175442 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175470 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175519 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175540 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175564 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/331b761a-fa99-405f-aedf-a94cb456cdfc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175583 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175601 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/331b761a-fa99-405f-aedf-a94cb456cdfc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175625 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175640 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175656 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-config\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175689 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278624 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278696 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278742 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/331b761a-fa99-405f-aedf-a94cb456cdfc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278782 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278814 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/331b761a-fa99-405f-aedf-a94cb456cdfc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278863 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278885 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278909 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-config\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278962 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 
crc kubenswrapper[4995]: I0126 23:26:33.279010 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.279030 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6zw6\" (UniqueName: \"kubernetes.io/projected/331b761a-fa99-405f-aedf-a94cb456cdfc-kube-api-access-r6zw6\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.279084 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.279145 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.284396 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.284914 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.288151 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.289752 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.289926 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.291946 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.293057 4995 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.293121 4995 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/07692cb0263c36332c1ef11dc7b21734b21031d82ebacc820f394211727ef21a/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.294248 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-config\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.297807 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/331b761a-fa99-405f-aedf-a94cb456cdfc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.297968 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.299363 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.315457 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/331b761a-fa99-405f-aedf-a94cb456cdfc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.316864 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6zw6\" (UniqueName: \"kubernetes.io/projected/331b761a-fa99-405f-aedf-a94cb456cdfc-kube-api-access-r6zw6\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.345012 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod 
\"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.481496 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.893967 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:26:34 crc kubenswrapper[4995]: I0126 23:26:34.029500 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"331b761a-fa99-405f-aedf-a94cb456cdfc","Type":"ContainerStarted","Data":"0394c2ef21f8fe7b4cbc1ab82e9ff5627689a1a39aad71cbd3c82f561617a208"} Jan 26 23:26:34 crc kubenswrapper[4995]: I0126 23:26:34.528063 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" path="/var/lib/kubelet/pods/0d12a498-5a42-42d5-9ab1-12d436c41187/volumes" Jan 26 23:26:37 crc kubenswrapper[4995]: I0126 23:26:37.059704 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"331b761a-fa99-405f-aedf-a94cb456cdfc","Type":"ContainerStarted","Data":"de647f9ac8aab99780612f34a28a068c79389bc86568af3d6363169cf9cd3e14"} Jan 26 23:26:44 crc kubenswrapper[4995]: I0126 23:26:44.973456 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:26:45 crc kubenswrapper[4995]: I0126 23:26:45.126250 4995 generic.go:334] "Generic (PLEG): container finished" podID="331b761a-fa99-405f-aedf-a94cb456cdfc" containerID="de647f9ac8aab99780612f34a28a068c79389bc86568af3d6363169cf9cd3e14" exitCode=0 Jan 26 23:26:45 crc kubenswrapper[4995]: I0126 23:26:45.126298 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"331b761a-fa99-405f-aedf-a94cb456cdfc","Type":"ContainerDied","Data":"de647f9ac8aab99780612f34a28a068c79389bc86568af3d6363169cf9cd3e14"} Jan 26 23:26:45 crc kubenswrapper[4995]: I0126 23:26:45.251286 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:26:46 crc kubenswrapper[4995]: I0126 23:26:46.136345 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"331b761a-fa99-405f-aedf-a94cb456cdfc","Type":"ContainerStarted","Data":"3f7629aff9fe372c60ee1f1fac00dcd8f2a348de8bfcebbf798ef20ad9107e1d"} Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.258876 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-sync-27jdj"] Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.260127 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.262512 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.262736 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.262783 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-vx5bj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.263059 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.276498 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-27jdj"] Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 
23:26:47.407395 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clrjd\" (UniqueName: \"kubernetes.io/projected/ad6fb114-59e8-443d-acd9-7241b8ee783c-kube-api-access-clrjd\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.407488 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-config-data\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.407602 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-combined-ca-bundle\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.509462 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-combined-ca-bundle\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.509555 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clrjd\" (UniqueName: \"kubernetes.io/projected/ad6fb114-59e8-443d-acd9-7241b8ee783c-kube-api-access-clrjd\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 
crc kubenswrapper[4995]: I0126 23:26:47.509596 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-config-data\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.515547 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-config-data\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.519381 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-combined-ca-bundle\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.527076 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clrjd\" (UniqueName: \"kubernetes.io/projected/ad6fb114-59e8-443d-acd9-7241b8ee783c-kube-api-access-clrjd\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.577872 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.081885 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-27jdj"] Jan 26 23:26:48 crc kubenswrapper[4995]: W0126 23:26:48.089321 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad6fb114_59e8_443d_acd9_7241b8ee783c.slice/crio-643ed4e226a078320fabf79998d01b8d2f252ac39a159da417ed3ad6af5c8847 WatchSource:0}: Error finding container 643ed4e226a078320fabf79998d01b8d2f252ac39a159da417ed3ad6af5c8847: Status 404 returned error can't find the container with id 643ed4e226a078320fabf79998d01b8d2f252ac39a159da417ed3ad6af5c8847 Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.149629 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-27jdj" event={"ID":"ad6fb114-59e8-443d-acd9-7241b8ee783c","Type":"ContainerStarted","Data":"643ed4e226a078320fabf79998d01b8d2f252ac39a159da417ed3ad6af5c8847"} Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.152517 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"331b761a-fa99-405f-aedf-a94cb456cdfc","Type":"ContainerStarted","Data":"672ac3cf1ceefa2116be2b4c4e6819a8f341f8a671d408c38acbba77ad970241"} Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.152567 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"331b761a-fa99-405f-aedf-a94cb456cdfc","Type":"ContainerStarted","Data":"2847af464585b43a387053c0434865b9b97178de8b8f7bc6e7b4b1d4c2e7dec8"} Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.186387 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=15.186371836 
podStartE2EDuration="15.186371836s" podCreationTimestamp="2026-01-26 23:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:26:48.18134457 +0000 UTC m=+1112.346052035" watchObservedRunningTime="2026-01-26 23:26:48.186371836 +0000 UTC m=+1112.351079301" Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.481568 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.481607 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.490710 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:49 crc kubenswrapper[4995]: I0126 23:26:49.180245 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:57 crc kubenswrapper[4995]: I0126 23:26:57.254184 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-27jdj" event={"ID":"ad6fb114-59e8-443d-acd9-7241b8ee783c","Type":"ContainerStarted","Data":"7e8cf2c919653011e8c269ce173fbce08dab23f7ee1814809bea2eec540dfb95"} Jan 26 23:26:57 crc kubenswrapper[4995]: I0126 23:26:57.269856 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-db-sync-27jdj" podStartSLOduration=1.756733699 podStartE2EDuration="10.26983843s" podCreationTimestamp="2026-01-26 23:26:47 +0000 UTC" firstStartedPulling="2026-01-26 23:26:48.092744983 +0000 UTC m=+1112.257452438" lastFinishedPulling="2026-01-26 23:26:56.605849684 +0000 UTC m=+1120.770557169" observedRunningTime="2026-01-26 23:26:57.265295916 +0000 UTC m=+1121.430003381" 
watchObservedRunningTime="2026-01-26 23:26:57.26983843 +0000 UTC m=+1121.434545885" Jan 26 23:27:02 crc kubenswrapper[4995]: I0126 23:27:02.320507 4995 generic.go:334] "Generic (PLEG): container finished" podID="ad6fb114-59e8-443d-acd9-7241b8ee783c" containerID="7e8cf2c919653011e8c269ce173fbce08dab23f7ee1814809bea2eec540dfb95" exitCode=0 Jan 26 23:27:02 crc kubenswrapper[4995]: I0126 23:27:02.320599 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-27jdj" event={"ID":"ad6fb114-59e8-443d-acd9-7241b8ee783c","Type":"ContainerDied","Data":"7e8cf2c919653011e8c269ce173fbce08dab23f7ee1814809bea2eec540dfb95"} Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.694350 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.824240 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-config-data\") pod \"ad6fb114-59e8-443d-acd9-7241b8ee783c\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.824331 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clrjd\" (UniqueName: \"kubernetes.io/projected/ad6fb114-59e8-443d-acd9-7241b8ee783c-kube-api-access-clrjd\") pod \"ad6fb114-59e8-443d-acd9-7241b8ee783c\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.824370 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-combined-ca-bundle\") pod \"ad6fb114-59e8-443d-acd9-7241b8ee783c\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.833428 4995 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad6fb114-59e8-443d-acd9-7241b8ee783c-kube-api-access-clrjd" (OuterVolumeSpecName: "kube-api-access-clrjd") pod "ad6fb114-59e8-443d-acd9-7241b8ee783c" (UID: "ad6fb114-59e8-443d-acd9-7241b8ee783c"). InnerVolumeSpecName "kube-api-access-clrjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.848448 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad6fb114-59e8-443d-acd9-7241b8ee783c" (UID: "ad6fb114-59e8-443d-acd9-7241b8ee783c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.867537 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-config-data" (OuterVolumeSpecName: "config-data") pod "ad6fb114-59e8-443d-acd9-7241b8ee783c" (UID: "ad6fb114-59e8-443d-acd9-7241b8ee783c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.925749 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.925780 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clrjd\" (UniqueName: \"kubernetes.io/projected/ad6fb114-59e8-443d-acd9-7241b8ee783c-kube-api-access-clrjd\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.925790 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.374357 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-27jdj" event={"ID":"ad6fb114-59e8-443d-acd9-7241b8ee783c","Type":"ContainerDied","Data":"643ed4e226a078320fabf79998d01b8d2f252ac39a159da417ed3ad6af5c8847"} Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.374412 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="643ed4e226a078320fabf79998d01b8d2f252ac39a159da417ed3ad6af5c8847" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.374487 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.497583 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-mjf8m"] Jan 26 23:27:04 crc kubenswrapper[4995]: E0126 23:27:04.498016 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6fb114-59e8-443d-acd9-7241b8ee783c" containerName="keystone-db-sync" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.498040 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6fb114-59e8-443d-acd9-7241b8ee783c" containerName="keystone-db-sync" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.498236 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad6fb114-59e8-443d-acd9-7241b8ee783c" containerName="keystone-db-sync" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.499277 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.502705 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.502878 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.503248 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.503668 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-vx5bj" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.503921 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.510272 4995 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-mjf8m"] Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.637740 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7hnx\" (UniqueName: \"kubernetes.io/projected/3f780111-a9d8-4610-ab38-a2d392cf9bfc-kube-api-access-j7hnx\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.637825 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-credential-keys\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.637847 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-config-data\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.637864 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-fernet-keys\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.637886 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-scripts\") pod 
\"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.637966 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-combined-ca-bundle\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.739454 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-credential-keys\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.739518 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-config-data\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.739545 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-fernet-keys\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.739577 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-scripts\") pod \"keystone-bootstrap-mjf8m\" (UID: 
\"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.739609 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-combined-ca-bundle\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.739696 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7hnx\" (UniqueName: \"kubernetes.io/projected/3f780111-a9d8-4610-ab38-a2d392cf9bfc-kube-api-access-j7hnx\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.746055 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-credential-keys\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.746047 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-config-data\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.748195 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-combined-ca-bundle\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " 
pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.750708 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-fernet-keys\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.765439 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-scripts\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.770575 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7hnx\" (UniqueName: \"kubernetes.io/projected/3f780111-a9d8-4610-ab38-a2d392cf9bfc-kube-api-access-j7hnx\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.827419 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.882304 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.902696 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.905579 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.906036 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.906386 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.045148 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-run-httpd\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.045556 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.045620 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtpdl\" (UniqueName: \"kubernetes.io/projected/737871cc-e3fc-48e8-983d-10b3171b8fd8-kube-api-access-mtpdl\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.045637 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-log-httpd\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.045672 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-config-data\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.045689 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.045727 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-scripts\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.146807 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtpdl\" (UniqueName: \"kubernetes.io/projected/737871cc-e3fc-48e8-983d-10b3171b8fd8-kube-api-access-mtpdl\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.146849 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-log-httpd\") pod \"ceilometer-0\" (UID: 
\"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.146880 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-config-data\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.146898 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.146937 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-scripts\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.146969 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-run-httpd\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.146998 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.147953 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-run-httpd\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.148035 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-log-httpd\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.152338 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.153301 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-scripts\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.156058 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-config-data\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.157812 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.169019 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtpdl\" (UniqueName: \"kubernetes.io/projected/737871cc-e3fc-48e8-983d-10b3171b8fd8-kube-api-access-mtpdl\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.240406 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.425047 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-mjf8m"] Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.721734 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:06 crc kubenswrapper[4995]: I0126 23:27:06.395484 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerStarted","Data":"b30e57df07b8d7a973237b5635e98b0b6195b3d09a9b2387b7e99c853dc62c13"} Jan 26 23:27:06 crc kubenswrapper[4995]: I0126 23:27:06.400450 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" event={"ID":"3f780111-a9d8-4610-ab38-a2d392cf9bfc","Type":"ContainerStarted","Data":"314d9c39155357f797a09c4f9a573a846dd0baf7a5fe546731579ee9d200fd82"} Jan 26 23:27:06 crc kubenswrapper[4995]: I0126 23:27:06.400480 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" event={"ID":"3f780111-a9d8-4610-ab38-a2d392cf9bfc","Type":"ContainerStarted","Data":"6e4ffcbf563bd07699c50048988b8b1ab9175d90361dc0342386b3bb930f4956"} Jan 26 23:27:06 crc kubenswrapper[4995]: I0126 23:27:06.424157 4995 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" podStartSLOduration=2.424130446 podStartE2EDuration="2.424130446s" podCreationTimestamp="2026-01-26 23:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:27:06.416328631 +0000 UTC m=+1130.581036096" watchObservedRunningTime="2026-01-26 23:27:06.424130446 +0000 UTC m=+1130.588837931" Jan 26 23:27:07 crc kubenswrapper[4995]: I0126 23:27:07.042406 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:09 crc kubenswrapper[4995]: I0126 23:27:09.429770 4995 generic.go:334] "Generic (PLEG): container finished" podID="3f780111-a9d8-4610-ab38-a2d392cf9bfc" containerID="314d9c39155357f797a09c4f9a573a846dd0baf7a5fe546731579ee9d200fd82" exitCode=0 Jan 26 23:27:09 crc kubenswrapper[4995]: I0126 23:27:09.429871 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" event={"ID":"3f780111-a9d8-4610-ab38-a2d392cf9bfc","Type":"ContainerDied","Data":"314d9c39155357f797a09c4f9a573a846dd0baf7a5fe546731579ee9d200fd82"} Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.439641 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerStarted","Data":"c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908"} Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.839682 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.940519 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-combined-ca-bundle\") pod \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.940587 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-fernet-keys\") pod \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.940647 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-scripts\") pod \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.940690 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-config-data\") pod \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.940740 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7hnx\" (UniqueName: \"kubernetes.io/projected/3f780111-a9d8-4610-ab38-a2d392cf9bfc-kube-api-access-j7hnx\") pod \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.940820 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-credential-keys\") pod \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.945514 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f780111-a9d8-4610-ab38-a2d392cf9bfc-kube-api-access-j7hnx" (OuterVolumeSpecName: "kube-api-access-j7hnx") pod "3f780111-a9d8-4610-ab38-a2d392cf9bfc" (UID: "3f780111-a9d8-4610-ab38-a2d392cf9bfc"). InnerVolumeSpecName "kube-api-access-j7hnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.960011 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3f780111-a9d8-4610-ab38-a2d392cf9bfc" (UID: "3f780111-a9d8-4610-ab38-a2d392cf9bfc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.960167 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3f780111-a9d8-4610-ab38-a2d392cf9bfc" (UID: "3f780111-a9d8-4610-ab38-a2d392cf9bfc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.962574 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-scripts" (OuterVolumeSpecName: "scripts") pod "3f780111-a9d8-4610-ab38-a2d392cf9bfc" (UID: "3f780111-a9d8-4610-ab38-a2d392cf9bfc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.967643 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-config-data" (OuterVolumeSpecName: "config-data") pod "3f780111-a9d8-4610-ab38-a2d392cf9bfc" (UID: "3f780111-a9d8-4610-ab38-a2d392cf9bfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.972296 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f780111-a9d8-4610-ab38-a2d392cf9bfc" (UID: "3f780111-a9d8-4610-ab38-a2d392cf9bfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.045024 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.045145 4995 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.045298 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.045322 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:11 crc 
kubenswrapper[4995]: I0126 23:27:11.045425 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7hnx\" (UniqueName: \"kubernetes.io/projected/3f780111-a9d8-4610-ab38-a2d392cf9bfc-kube-api-access-j7hnx\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.045445 4995 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.456864 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" event={"ID":"3f780111-a9d8-4610-ab38-a2d392cf9bfc","Type":"ContainerDied","Data":"6e4ffcbf563bd07699c50048988b8b1ab9175d90361dc0342386b3bb930f4956"} Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.456914 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e4ffcbf563bd07699c50048988b8b1ab9175d90361dc0342386b3bb930f4956" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.456943 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.530904 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-mjf8m"] Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.537902 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-mjf8m"] Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.615426 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w6lw7"] Jan 26 23:27:11 crc kubenswrapper[4995]: E0126 23:27:11.616038 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f780111-a9d8-4610-ab38-a2d392cf9bfc" containerName="keystone-bootstrap" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.616082 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f780111-a9d8-4610-ab38-a2d392cf9bfc" containerName="keystone-bootstrap" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.616439 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f780111-a9d8-4610-ab38-a2d392cf9bfc" containerName="keystone-bootstrap" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.617417 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.625427 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w6lw7"] Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.625629 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.625818 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.625943 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.626118 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-vx5bj" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.626211 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.755869 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-fernet-keys\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.755922 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-credential-keys\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.756089 
4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-scripts\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.756187 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-combined-ca-bundle\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.756253 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-config-data\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.756338 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v9ld\" (UniqueName: \"kubernetes.io/projected/049184a2-2d7f-4107-8a72-197fede36e5b-kube-api-access-8v9ld\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.858227 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-config-data\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc 
kubenswrapper[4995]: I0126 23:27:11.858325 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v9ld\" (UniqueName: \"kubernetes.io/projected/049184a2-2d7f-4107-8a72-197fede36e5b-kube-api-access-8v9ld\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.858397 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-fernet-keys\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.858439 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-credential-keys\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.858526 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-scripts\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.858589 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-combined-ca-bundle\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.867507 
4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-scripts\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.868020 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-config-data\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.868235 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-fernet-keys\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.872867 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-credential-keys\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.873486 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-combined-ca-bundle\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.894885 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v9ld\" 
(UniqueName: \"kubernetes.io/projected/049184a2-2d7f-4107-8a72-197fede36e5b-kube-api-access-8v9ld\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.974606 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:12 crc kubenswrapper[4995]: I0126 23:27:12.532634 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f780111-a9d8-4610-ab38-a2d392cf9bfc" path="/var/lib/kubelet/pods/3f780111-a9d8-4610-ab38-a2d392cf9bfc/volumes" Jan 26 23:27:12 crc kubenswrapper[4995]: I0126 23:27:12.970172 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w6lw7"] Jan 26 23:27:12 crc kubenswrapper[4995]: W0126 23:27:12.972726 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod049184a2_2d7f_4107_8a72_197fede36e5b.slice/crio-77847e506d7fe32fe66cb3ece68a2c451ca4dbee6bd3973dfb07be742ab2d849 WatchSource:0}: Error finding container 77847e506d7fe32fe66cb3ece68a2c451ca4dbee6bd3973dfb07be742ab2d849: Status 404 returned error can't find the container with id 77847e506d7fe32fe66cb3ece68a2c451ca4dbee6bd3973dfb07be742ab2d849 Jan 26 23:27:13 crc kubenswrapper[4995]: I0126 23:27:13.479249 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" event={"ID":"049184a2-2d7f-4107-8a72-197fede36e5b","Type":"ContainerStarted","Data":"7b52cd788a34a33152655fad206082ca4ae4aa2dde98a41e59cc6dacf5cc9c02"} Jan 26 23:27:13 crc kubenswrapper[4995]: I0126 23:27:13.479660 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" 
event={"ID":"049184a2-2d7f-4107-8a72-197fede36e5b","Type":"ContainerStarted","Data":"77847e506d7fe32fe66cb3ece68a2c451ca4dbee6bd3973dfb07be742ab2d849"} Jan 26 23:27:13 crc kubenswrapper[4995]: I0126 23:27:13.483759 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerStarted","Data":"e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618"} Jan 26 23:27:13 crc kubenswrapper[4995]: I0126 23:27:13.514229 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" podStartSLOduration=2.514200438 podStartE2EDuration="2.514200438s" podCreationTimestamp="2026-01-26 23:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:27:13.508909256 +0000 UTC m=+1137.673616761" watchObservedRunningTime="2026-01-26 23:27:13.514200438 +0000 UTC m=+1137.678907943" Jan 26 23:27:16 crc kubenswrapper[4995]: I0126 23:27:16.509542 4995 generic.go:334] "Generic (PLEG): container finished" podID="049184a2-2d7f-4107-8a72-197fede36e5b" containerID="7b52cd788a34a33152655fad206082ca4ae4aa2dde98a41e59cc6dacf5cc9c02" exitCode=0 Jan 26 23:27:16 crc kubenswrapper[4995]: I0126 23:27:16.510226 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" event={"ID":"049184a2-2d7f-4107-8a72-197fede36e5b","Type":"ContainerDied","Data":"7b52cd788a34a33152655fad206082ca4ae4aa2dde98a41e59cc6dacf5cc9c02"} Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.896711 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.974247 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-combined-ca-bundle\") pod \"049184a2-2d7f-4107-8a72-197fede36e5b\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.974333 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-fernet-keys\") pod \"049184a2-2d7f-4107-8a72-197fede36e5b\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.974385 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v9ld\" (UniqueName: \"kubernetes.io/projected/049184a2-2d7f-4107-8a72-197fede36e5b-kube-api-access-8v9ld\") pod \"049184a2-2d7f-4107-8a72-197fede36e5b\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.976049 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-credential-keys\") pod \"049184a2-2d7f-4107-8a72-197fede36e5b\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.976412 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-scripts\") pod \"049184a2-2d7f-4107-8a72-197fede36e5b\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.976834 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-config-data\") pod \"049184a2-2d7f-4107-8a72-197fede36e5b\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.979493 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "049184a2-2d7f-4107-8a72-197fede36e5b" (UID: "049184a2-2d7f-4107-8a72-197fede36e5b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.979610 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049184a2-2d7f-4107-8a72-197fede36e5b-kube-api-access-8v9ld" (OuterVolumeSpecName: "kube-api-access-8v9ld") pod "049184a2-2d7f-4107-8a72-197fede36e5b" (UID: "049184a2-2d7f-4107-8a72-197fede36e5b"). InnerVolumeSpecName "kube-api-access-8v9ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.984746 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-scripts" (OuterVolumeSpecName: "scripts") pod "049184a2-2d7f-4107-8a72-197fede36e5b" (UID: "049184a2-2d7f-4107-8a72-197fede36e5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.987422 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "049184a2-2d7f-4107-8a72-197fede36e5b" (UID: "049184a2-2d7f-4107-8a72-197fede36e5b"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.011991 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-config-data" (OuterVolumeSpecName: "config-data") pod "049184a2-2d7f-4107-8a72-197fede36e5b" (UID: "049184a2-2d7f-4107-8a72-197fede36e5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.020183 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "049184a2-2d7f-4107-8a72-197fede36e5b" (UID: "049184a2-2d7f-4107-8a72-197fede36e5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.084414 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.084451 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.084472 4995 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.084485 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v9ld\" (UniqueName: \"kubernetes.io/projected/049184a2-2d7f-4107-8a72-197fede36e5b-kube-api-access-8v9ld\") on node \"crc\" DevicePath \"\""
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.084498 4995 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.084508 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.536045 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.537208 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerStarted","Data":"76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175"}
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.537285 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" event={"ID":"049184a2-2d7f-4107-8a72-197fede36e5b","Type":"ContainerDied","Data":"77847e506d7fe32fe66cb3ece68a2c451ca4dbee6bd3973dfb07be742ab2d849"}
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.537316 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77847e506d7fe32fe66cb3ece68a2c451ca4dbee6bd3973dfb07be742ab2d849"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.633694 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-7cb4bf847-27cbg"]
Jan 26 23:27:18 crc kubenswrapper[4995]: E0126 23:27:18.634242 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049184a2-2d7f-4107-8a72-197fede36e5b" containerName="keystone-bootstrap"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.634342 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="049184a2-2d7f-4107-8a72-197fede36e5b" containerName="keystone-bootstrap"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.634656 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="049184a2-2d7f-4107-8a72-197fede36e5b" containerName="keystone-bootstrap"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.635422 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.638612 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-public-svc"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.638690 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.639096 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-vx5bj"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.639172 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-internal-svc"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.639392 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.642429 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.663644 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-7cb4bf847-27cbg"]
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697268 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-config-data\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697334 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-internal-tls-certs\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697368 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-public-tls-certs\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697525 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-fernet-keys\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697589 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w7l9\" (UniqueName: \"kubernetes.io/projected/284fb412-d705-4c0a-b11d-74f9074a9b6c-kube-api-access-7w7l9\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697684 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-scripts\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697736 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-combined-ca-bundle\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697917 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-credential-keys\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.799955 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-config-data\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.800030 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-internal-tls-certs\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.800076 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-public-tls-certs\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.800151 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-fernet-keys\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.800192 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w7l9\" (UniqueName: \"kubernetes.io/projected/284fb412-d705-4c0a-b11d-74f9074a9b6c-kube-api-access-7w7l9\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.800249 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-scripts\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.800285 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-combined-ca-bundle\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.800353 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-credential-keys\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.804147 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-scripts\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.804271 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-public-tls-certs\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.804409 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-fernet-keys\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.804458 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-internal-tls-certs\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.805580 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-combined-ca-bundle\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.805705 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-credential-keys\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.813221 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-config-data\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.822233 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w7l9\" (UniqueName: \"kubernetes.io/projected/284fb412-d705-4c0a-b11d-74f9074a9b6c-kube-api-access-7w7l9\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.958466 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:19 crc kubenswrapper[4995]: I0126 23:27:19.427587 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-7cb4bf847-27cbg"]
Jan 26 23:27:19 crc kubenswrapper[4995]: I0126 23:27:19.544096 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" event={"ID":"284fb412-d705-4c0a-b11d-74f9074a9b6c","Type":"ContainerStarted","Data":"c831199d822b765352d7f3cfddb29be2235d20cab03abeb963d2d581104d23cb"}
Jan 26 23:27:20 crc kubenswrapper[4995]: I0126 23:27:20.553945 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" event={"ID":"284fb412-d705-4c0a-b11d-74f9074a9b6c","Type":"ContainerStarted","Data":"d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110"}
Jan 26 23:27:20 crc kubenswrapper[4995]: I0126 23:27:20.555424 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:20 crc kubenswrapper[4995]: I0126 23:27:20.584356 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" podStartSLOduration=2.584329851 podStartE2EDuration="2.584329851s" podCreationTimestamp="2026-01-26 23:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:27:20.578213878 +0000 UTC m=+1144.742921363" watchObservedRunningTime="2026-01-26 23:27:20.584329851 +0000 UTC m=+1144.749037336"
Jan 26 23:27:27 crc kubenswrapper[4995]: I0126 23:27:27.629178 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerStarted","Data":"3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81"}
Jan 26 23:27:27 crc kubenswrapper[4995]: I0126 23:27:27.629477 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-central-agent" containerID="cri-o://c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908" gracePeriod=30
Jan 26 23:27:27 crc kubenswrapper[4995]: I0126 23:27:27.629641 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-notification-agent" containerID="cri-o://e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618" gracePeriod=30
Jan 26 23:27:27 crc kubenswrapper[4995]: I0126 23:27:27.629742 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="sg-core" containerID="cri-o://76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175" gracePeriod=30
Jan 26 23:27:27 crc kubenswrapper[4995]: I0126 23:27:27.629763 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:27:27 crc kubenswrapper[4995]: I0126 23:27:27.632003 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="proxy-httpd" containerID="cri-o://3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81" gracePeriod=30
Jan 26 23:27:27 crc kubenswrapper[4995]: I0126 23:27:27.665379 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.5880977290000002 podStartE2EDuration="23.665354356s" podCreationTimestamp="2026-01-26 23:27:04 +0000 UTC" firstStartedPulling="2026-01-26 23:27:05.719817852 +0000 UTC m=+1129.884525317" lastFinishedPulling="2026-01-26 23:27:26.797074489 +0000 UTC m=+1150.961781944" observedRunningTime="2026-01-26 23:27:27.661348196 +0000 UTC m=+1151.826055661" watchObservedRunningTime="2026-01-26 23:27:27.665354356 +0000 UTC m=+1151.830061821"
Jan 26 23:27:28 crc kubenswrapper[4995]: I0126 23:27:28.638473 4995 generic.go:334] "Generic (PLEG): container finished" podID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerID="3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81" exitCode=0
Jan 26 23:27:28 crc kubenswrapper[4995]: I0126 23:27:28.639549 4995 generic.go:334] "Generic (PLEG): container finished" podID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerID="76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175" exitCode=2
Jan 26 23:27:28 crc kubenswrapper[4995]: I0126 23:27:28.639619 4995 generic.go:334] "Generic (PLEG): container finished" podID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerID="c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908" exitCode=0
Jan 26 23:27:28 crc kubenswrapper[4995]: I0126 23:27:28.638532 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerDied","Data":"3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81"}
Jan 26 23:27:28 crc kubenswrapper[4995]: I0126 23:27:28.639749 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerDied","Data":"76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175"}
Jan 26 23:27:28 crc kubenswrapper[4995]: I0126 23:27:28.639805 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerDied","Data":"c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908"}
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.461752 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.617133 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-log-httpd\") pod \"737871cc-e3fc-48e8-983d-10b3171b8fd8\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") "
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.617208 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-run-httpd\") pod \"737871cc-e3fc-48e8-983d-10b3171b8fd8\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") "
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.617281 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-scripts\") pod \"737871cc-e3fc-48e8-983d-10b3171b8fd8\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") "
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.617412 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtpdl\" (UniqueName: \"kubernetes.io/projected/737871cc-e3fc-48e8-983d-10b3171b8fd8-kube-api-access-mtpdl\") pod \"737871cc-e3fc-48e8-983d-10b3171b8fd8\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") "
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.617449 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-sg-core-conf-yaml\") pod \"737871cc-e3fc-48e8-983d-10b3171b8fd8\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") "
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.617988 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "737871cc-e3fc-48e8-983d-10b3171b8fd8" (UID: "737871cc-e3fc-48e8-983d-10b3171b8fd8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.618093 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-combined-ca-bundle\") pod \"737871cc-e3fc-48e8-983d-10b3171b8fd8\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") "
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.618155 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-config-data\") pod \"737871cc-e3fc-48e8-983d-10b3171b8fd8\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") "
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.618659 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "737871cc-e3fc-48e8-983d-10b3171b8fd8" (UID: "737871cc-e3fc-48e8-983d-10b3171b8fd8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.618709 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.622686 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/737871cc-e3fc-48e8-983d-10b3171b8fd8-kube-api-access-mtpdl" (OuterVolumeSpecName: "kube-api-access-mtpdl") pod "737871cc-e3fc-48e8-983d-10b3171b8fd8" (UID: "737871cc-e3fc-48e8-983d-10b3171b8fd8"). InnerVolumeSpecName "kube-api-access-mtpdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.625898 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-scripts" (OuterVolumeSpecName: "scripts") pod "737871cc-e3fc-48e8-983d-10b3171b8fd8" (UID: "737871cc-e3fc-48e8-983d-10b3171b8fd8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.641851 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "737871cc-e3fc-48e8-983d-10b3171b8fd8" (UID: "737871cc-e3fc-48e8-983d-10b3171b8fd8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.657457 4995 generic.go:334] "Generic (PLEG): container finished" podID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerID="e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618" exitCode=0
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.657506 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.657515 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerDied","Data":"e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618"}
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.657556 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerDied","Data":"b30e57df07b8d7a973237b5635e98b0b6195b3d09a9b2387b7e99c853dc62c13"}
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.657578 4995 scope.go:117] "RemoveContainer" containerID="3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81"
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.719836 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "737871cc-e3fc-48e8-983d-10b3171b8fd8" (UID: "737871cc-e3fc-48e8-983d-10b3171b8fd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.720232 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.720256 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.720269 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtpdl\" (UniqueName: \"kubernetes.io/projected/737871cc-e3fc-48e8-983d-10b3171b8fd8-kube-api-access-mtpdl\") on node \"crc\" DevicePath \"\""
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.720283 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.725431 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-config-data" (OuterVolumeSpecName: "config-data") pod "737871cc-e3fc-48e8-983d-10b3171b8fd8" (UID: "737871cc-e3fc-48e8-983d-10b3171b8fd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.765510 4995 scope.go:117] "RemoveContainer" containerID="76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175"
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.786059 4995 scope.go:117] "RemoveContainer" containerID="e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618"
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.803848 4995 scope.go:117] "RemoveContainer" containerID="c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908"
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.817747 4995 scope.go:117] "RemoveContainer" containerID="3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81"
Jan 26 23:27:30 crc kubenswrapper[4995]: E0126 23:27:30.818209 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81\": container with ID starting with 3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81 not found: ID does not exist" containerID="3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81"
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.818264 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81"} err="failed to get container status \"3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81\": rpc error: code = NotFound desc = could not find container \"3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81\": container with ID starting with 3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81 not found: ID does not exist"
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.818302 4995 scope.go:117] "RemoveContainer" containerID="76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175"
Jan 26 23:27:30 crc kubenswrapper[4995]: E0126 23:27:30.818860 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175\": container with ID starting with 76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175 not found: ID does not exist" containerID="76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175"
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.818888 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175"} err="failed to get container status \"76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175\": rpc error: code = NotFound desc = could not find container \"76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175\": container with ID starting with 76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175 not found: ID does not exist"
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.818908 4995 scope.go:117] "RemoveContainer" containerID="e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618"
Jan 26 23:27:30 crc kubenswrapper[4995]: E0126 23:27:30.819163 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618\": container with ID starting with e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618 not found: ID does not exist" containerID="e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618"
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.819195 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618"} err="failed to get container status \"e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618\": rpc error: code = NotFound desc = could not find container \"e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618\": container with ID starting with e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618 not found: ID does not exist"
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.819214 4995 scope.go:117] "RemoveContainer" containerID="c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908"
Jan 26 23:27:30 crc kubenswrapper[4995]: E0126 23:27:30.820575 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908\": container with ID starting with c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908 not found: ID does not exist" containerID="c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908"
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.820616 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908"} err="failed to get container status \"c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908\": rpc error: code = NotFound desc = could not find container \"c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908\": container with ID starting with c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908 not found: ID does not exist"
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.821943 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.822020 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.006794 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.018556 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.029220 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:27:31 crc kubenswrapper[4995]: E0126 23:27:31.029702 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-notification-agent"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.029729 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-notification-agent"
Jan 26 23:27:31 crc kubenswrapper[4995]: E0126 23:27:31.029759 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-central-agent"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.029772 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-central-agent"
Jan 26 23:27:31 crc kubenswrapper[4995]: E0126 23:27:31.029797 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="sg-core"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.029810 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="sg-core"
Jan 26 23:27:31 crc kubenswrapper[4995]: E0126 23:27:31.029826 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="proxy-httpd"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.029838 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="proxy-httpd"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.030171 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="sg-core"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.030194 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="proxy-httpd"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.030223 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-notification-agent"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.030240 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-central-agent"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.037468 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.037593 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.040145 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.040795 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.118059 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:27:31 crc kubenswrapper[4995]: E0126 23:27:31.118916 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-7pjjh log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="watcher-kuttl-default/ceilometer-0" podUID="88e61da0-4417-469b-a34d-ebdc2c449e85"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.126978 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.127033 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pjjh\" (UniqueName: \"kubernetes.io/projected/88e61da0-4417-469b-a34d-ebdc2c449e85-kube-api-access-7pjjh\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.127141 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName:
\"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-run-httpd\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.127164 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-scripts\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.127318 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-config-data\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.127604 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-log-httpd\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.127663 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.229209 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-config-data\") pod \"ceilometer-0\" (UID: 
\"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.229369 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-log-httpd\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.229409 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.229458 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.229500 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pjjh\" (UniqueName: \"kubernetes.io/projected/88e61da0-4417-469b-a34d-ebdc2c449e85-kube-api-access-7pjjh\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.229573 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-run-httpd\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.229607 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-scripts\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.230257 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-log-httpd\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.230455 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-run-httpd\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.233625 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.235857 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.235925 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-scripts\") pod \"ceilometer-0\" (UID: 
\"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.237393 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-config-data\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.245627 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pjjh\" (UniqueName: \"kubernetes.io/projected/88e61da0-4417-469b-a34d-ebdc2c449e85-kube-api-access-7pjjh\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.668515 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.702976 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.839170 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-config-data\") pod \"88e61da0-4417-469b-a34d-ebdc2c449e85\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.839271 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-sg-core-conf-yaml\") pod \"88e61da0-4417-469b-a34d-ebdc2c449e85\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.839310 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-log-httpd\") pod \"88e61da0-4417-469b-a34d-ebdc2c449e85\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.839333 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-combined-ca-bundle\") pod \"88e61da0-4417-469b-a34d-ebdc2c449e85\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.839446 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pjjh\" (UniqueName: \"kubernetes.io/projected/88e61da0-4417-469b-a34d-ebdc2c449e85-kube-api-access-7pjjh\") pod \"88e61da0-4417-469b-a34d-ebdc2c449e85\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.839472 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-run-httpd\") pod \"88e61da0-4417-469b-a34d-ebdc2c449e85\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.839517 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-scripts\") pod \"88e61da0-4417-469b-a34d-ebdc2c449e85\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.840548 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "88e61da0-4417-469b-a34d-ebdc2c449e85" (UID: "88e61da0-4417-469b-a34d-ebdc2c449e85"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.841194 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "88e61da0-4417-469b-a34d-ebdc2c449e85" (UID: "88e61da0-4417-469b-a34d-ebdc2c449e85"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.846061 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "88e61da0-4417-469b-a34d-ebdc2c449e85" (UID: "88e61da0-4417-469b-a34d-ebdc2c449e85"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.846293 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e61da0-4417-469b-a34d-ebdc2c449e85-kube-api-access-7pjjh" (OuterVolumeSpecName: "kube-api-access-7pjjh") pod "88e61da0-4417-469b-a34d-ebdc2c449e85" (UID: "88e61da0-4417-469b-a34d-ebdc2c449e85"). InnerVolumeSpecName "kube-api-access-7pjjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.847190 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-config-data" (OuterVolumeSpecName: "config-data") pod "88e61da0-4417-469b-a34d-ebdc2c449e85" (UID: "88e61da0-4417-469b-a34d-ebdc2c449e85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.847289 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-scripts" (OuterVolumeSpecName: "scripts") pod "88e61da0-4417-469b-a34d-ebdc2c449e85" (UID: "88e61da0-4417-469b-a34d-ebdc2c449e85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.853409 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88e61da0-4417-469b-a34d-ebdc2c449e85" (UID: "88e61da0-4417-469b-a34d-ebdc2c449e85"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.941631 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.941669 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.941682 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.941690 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.941699 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pjjh\" (UniqueName: \"kubernetes.io/projected/88e61da0-4417-469b-a34d-ebdc2c449e85-kube-api-access-7pjjh\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.941707 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.941717 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.528553 4995 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" path="/var/lib/kubelet/pods/737871cc-e3fc-48e8-983d-10b3171b8fd8/volumes" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.674434 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.759293 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.770497 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.777369 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.779518 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.782074 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.782331 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.784056 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.857934 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.858160 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-scripts\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.858238 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-config-data\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.858403 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.858501 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-run-httpd\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.858607 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4dwg\" (UniqueName: \"kubernetes.io/projected/f03d11d7-e58e-4d08-85b4-c512e9deb887-kube-api-access-s4dwg\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.858683 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-log-httpd\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960082 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960206 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-scripts\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960247 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-config-data\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960277 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960320 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-run-httpd\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960398 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4dwg\" (UniqueName: \"kubernetes.io/projected/f03d11d7-e58e-4d08-85b4-c512e9deb887-kube-api-access-s4dwg\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960449 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-log-httpd\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960881 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-run-httpd\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.961515 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-log-httpd\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.963921 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.964651 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.967753 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-scripts\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.973903 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-config-data\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.981477 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4dwg\" (UniqueName: \"kubernetes.io/projected/f03d11d7-e58e-4d08-85b4-c512e9deb887-kube-api-access-s4dwg\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:33 crc kubenswrapper[4995]: I0126 23:27:33.102321 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:33 crc kubenswrapper[4995]: I0126 23:27:33.554749 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:33 crc kubenswrapper[4995]: W0126 23:27:33.560230 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf03d11d7_e58e_4d08_85b4_c512e9deb887.slice/crio-b81f7b39adcb26c2a82824c28d1ccdf73fe6f1cd66212b80b1c42bda2ae42625 WatchSource:0}: Error finding container b81f7b39adcb26c2a82824c28d1ccdf73fe6f1cd66212b80b1c42bda2ae42625: Status 404 returned error can't find the container with id b81f7b39adcb26c2a82824c28d1ccdf73fe6f1cd66212b80b1c42bda2ae42625 Jan 26 23:27:33 crc kubenswrapper[4995]: I0126 23:27:33.687870 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerStarted","Data":"b81f7b39adcb26c2a82824c28d1ccdf73fe6f1cd66212b80b1c42bda2ae42625"} Jan 26 23:27:34 crc kubenswrapper[4995]: I0126 23:27:34.527913 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e61da0-4417-469b-a34d-ebdc2c449e85" path="/var/lib/kubelet/pods/88e61da0-4417-469b-a34d-ebdc2c449e85/volumes" Jan 26 23:27:34 crc kubenswrapper[4995]: I0126 23:27:34.698575 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerStarted","Data":"36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c"} Jan 26 23:27:35 crc kubenswrapper[4995]: I0126 23:27:35.707376 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerStarted","Data":"56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4"} Jan 26 23:27:36 crc kubenswrapper[4995]: I0126 
23:27:36.720849 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerStarted","Data":"9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326"}
Jan 26 23:27:37 crc kubenswrapper[4995]: I0126 23:27:37.731865 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerStarted","Data":"3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91"}
Jan 26 23:27:37 crc kubenswrapper[4995]: I0126 23:27:37.732180 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:27:37 crc kubenswrapper[4995]: I0126 23:27:37.777471 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.505734491 podStartE2EDuration="5.777448722s" podCreationTimestamp="2026-01-26 23:27:32 +0000 UTC" firstStartedPulling="2026-01-26 23:27:33.563479101 +0000 UTC m=+1157.728186566" lastFinishedPulling="2026-01-26 23:27:36.835193322 +0000 UTC m=+1160.999900797" observedRunningTime="2026-01-26 23:27:37.769657637 +0000 UTC m=+1161.934365112" watchObservedRunningTime="2026-01-26 23:27:37.777448722 +0000 UTC m=+1161.942156187"
Jan 26 23:27:50 crc kubenswrapper[4995]: I0126 23:27:50.485272 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.270484 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstackclient"]
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.272657 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.273143 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"]
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.287740 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.287827 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstackclient-openstackclient-dockercfg-r4pnv"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.288026 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-config-secret"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.288086 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="5eee249b-5796-4844-a0e8-ae9fceb1ed44" podUID="f27553d1-06f5-4e72-9d14-714d48fbd854"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.293186 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/openstackclient"]
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.300515 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/openstackclient"]
Jan 26 23:27:52 crc kubenswrapper[4995]: E0126 23:27:52.305871 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-hwg4s openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[combined-ca-bundle kube-api-access-hwg4s openstack-config openstack-config-secret]: context canceled" pod="watcher-kuttl-default/openstackclient" podUID="5eee249b-5796-4844-a0e8-ae9fceb1ed44"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.311159 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstackclient"]
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.312475 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.315523 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"]
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.324616 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2pjk\" (UniqueName: \"kubernetes.io/projected/f27553d1-06f5-4e72-9d14-714d48fbd854-kube-api-access-l2pjk\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.324654 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f27553d1-06f5-4e72-9d14-714d48fbd854-openstack-config-secret\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.324688 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f27553d1-06f5-4e72-9d14-714d48fbd854-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.324719 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f27553d1-06f5-4e72-9d14-714d48fbd854-openstack-config\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.428524 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2pjk\" (UniqueName: \"kubernetes.io/projected/f27553d1-06f5-4e72-9d14-714d48fbd854-kube-api-access-l2pjk\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.428579 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f27553d1-06f5-4e72-9d14-714d48fbd854-openstack-config-secret\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.428623 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f27553d1-06f5-4e72-9d14-714d48fbd854-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.428659 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f27553d1-06f5-4e72-9d14-714d48fbd854-openstack-config\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.429703 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f27553d1-06f5-4e72-9d14-714d48fbd854-openstack-config\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.444141 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f27553d1-06f5-4e72-9d14-714d48fbd854-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.445578 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f27553d1-06f5-4e72-9d14-714d48fbd854-openstack-config-secret\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.465754 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2pjk\" (UniqueName: \"kubernetes.io/projected/f27553d1-06f5-4e72-9d14-714d48fbd854-kube-api-access-l2pjk\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.525683 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eee249b-5796-4844-a0e8-ae9fceb1ed44" path="/var/lib/kubelet/pods/5eee249b-5796-4844-a0e8-ae9fceb1ed44/volumes"
Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.628878 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:53 crc kubenswrapper[4995]: I0126 23:27:53.115554 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"]
Jan 26 23:27:53 crc kubenswrapper[4995]: I0126 23:27:53.253827 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:53 crc kubenswrapper[4995]: I0126 23:27:53.253814 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"f27553d1-06f5-4e72-9d14-714d48fbd854","Type":"ContainerStarted","Data":"ceabf1d326bea8f0d86c6be859d72952a99fe400b127cbdc9c9578e248ba9271"}
Jan 26 23:27:53 crc kubenswrapper[4995]: I0126 23:27:53.258150 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="5eee249b-5796-4844-a0e8-ae9fceb1ed44" podUID="f27553d1-06f5-4e72-9d14-714d48fbd854"
Jan 26 23:27:53 crc kubenswrapper[4995]: I0126 23:27:53.268776 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:53 crc kubenswrapper[4995]: I0126 23:27:53.272659 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="5eee249b-5796-4844-a0e8-ae9fceb1ed44" podUID="f27553d1-06f5-4e72-9d14-714d48fbd854"
Jan 26 23:27:54 crc kubenswrapper[4995]: I0126 23:27:54.261275 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient"
Jan 26 23:27:54 crc kubenswrapper[4995]: I0126 23:27:54.265178 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="5eee249b-5796-4844-a0e8-ae9fceb1ed44" podUID="f27553d1-06f5-4e72-9d14-714d48fbd854"
Jan 26 23:27:54 crc kubenswrapper[4995]: I0126 23:27:54.276772 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="5eee249b-5796-4844-a0e8-ae9fceb1ed44" podUID="f27553d1-06f5-4e72-9d14-714d48fbd854"
Jan 26 23:28:03 crc kubenswrapper[4995]: I0126 23:28:03.111869 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:28:03 crc kubenswrapper[4995]: I0126 23:28:03.342567 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"f27553d1-06f5-4e72-9d14-714d48fbd854","Type":"ContainerStarted","Data":"ba7945f7293bcf7b5a5ab4ffeb2793509255cd0a694a783c3fff9fa88b57d590"}
Jan 26 23:28:03 crc kubenswrapper[4995]: I0126 23:28:03.366940 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstackclient" podStartSLOduration=3.312140655 podStartE2EDuration="12.366919396s" podCreationTimestamp="2026-01-26 23:27:51 +0000 UTC" firstStartedPulling="2026-01-26 23:27:53.111428702 +0000 UTC m=+1177.276136167" lastFinishedPulling="2026-01-26 23:28:02.166207443 +0000 UTC m=+1186.330914908" observedRunningTime="2026-01-26 23:28:03.36188784 +0000 UTC m=+1187.526595305" watchObservedRunningTime="2026-01-26 23:28:03.366919396 +0000 UTC m=+1187.531626881"
Jan 26 23:28:05 crc kubenswrapper[4995]: I0126 23:28:05.694519 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Jan 26 23:28:05 crc kubenswrapper[4995]: I0126 23:28:05.695806 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="f3e7ef92-19e4-45be-ba39-e8c1b10c2110" containerName="kube-state-metrics" containerID="cri-o://dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f" gracePeriod=30
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.106429 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.259539 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vjq4\" (UniqueName: \"kubernetes.io/projected/f3e7ef92-19e4-45be-ba39-e8c1b10c2110-kube-api-access-2vjq4\") pod \"f3e7ef92-19e4-45be-ba39-e8c1b10c2110\" (UID: \"f3e7ef92-19e4-45be-ba39-e8c1b10c2110\") "
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.267356 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e7ef92-19e4-45be-ba39-e8c1b10c2110-kube-api-access-2vjq4" (OuterVolumeSpecName: "kube-api-access-2vjq4") pod "f3e7ef92-19e4-45be-ba39-e8c1b10c2110" (UID: "f3e7ef92-19e4-45be-ba39-e8c1b10c2110"). InnerVolumeSpecName "kube-api-access-2vjq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.361716 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vjq4\" (UniqueName: \"kubernetes.io/projected/f3e7ef92-19e4-45be-ba39-e8c1b10c2110-kube-api-access-2vjq4\") on node \"crc\" DevicePath \"\""
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.365655 4995 generic.go:334] "Generic (PLEG): container finished" podID="f3e7ef92-19e4-45be-ba39-e8c1b10c2110" containerID="dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f" exitCode=2
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.365697 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"f3e7ef92-19e4-45be-ba39-e8c1b10c2110","Type":"ContainerDied","Data":"dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f"}
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.365729 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"f3e7ef92-19e4-45be-ba39-e8c1b10c2110","Type":"ContainerDied","Data":"af898602486bbd8c6c6157c2639e73c909ad485c5d6cbfe7b28ea19f3b85c23d"}
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.365746 4995 scope.go:117] "RemoveContainer" containerID="dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.365764 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.390442 4995 scope.go:117] "RemoveContainer" containerID="dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f"
Jan 26 23:28:06 crc kubenswrapper[4995]: E0126 23:28:06.396098 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f\": container with ID starting with dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f not found: ID does not exist" containerID="dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.396183 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f"} err="failed to get container status \"dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f\": rpc error: code = NotFound desc = could not find container \"dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f\": container with ID starting with dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f not found: ID does not exist"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.407761 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.424257 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.431876 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Jan 26 23:28:06 crc kubenswrapper[4995]: E0126 23:28:06.432235 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e7ef92-19e4-45be-ba39-e8c1b10c2110" containerName="kube-state-metrics"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.432252 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e7ef92-19e4-45be-ba39-e8c1b10c2110" containerName="kube-state-metrics"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.432413 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e7ef92-19e4-45be-ba39-e8c1b10c2110" containerName="kube-state-metrics"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.433067 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.439556 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-kube-state-metrics-svc"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.439556 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"kube-state-metrics-tls-config"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.444850 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.525967 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e7ef92-19e4-45be-ba39-e8c1b10c2110" path="/var/lib/kubelet/pods/f3e7ef92-19e4-45be-ba39-e8c1b10c2110/volumes"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.564313 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.564371 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.564418 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r6zk\" (UniqueName: \"kubernetes.io/projected/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-api-access-5r6zk\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.564530 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.665978 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r6zk\" (UniqueName: \"kubernetes.io/projected/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-api-access-5r6zk\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.666117 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.666191 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.666218 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.671329 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.676970 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.682294 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r6zk\" (UniqueName: \"kubernetes.io/projected/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-api-access-5r6zk\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.682889 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.763404 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.773053 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.774781 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-central-agent" containerID="cri-o://36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c" gracePeriod=30
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.774960 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="proxy-httpd" containerID="cri-o://3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91" gracePeriod=30
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.775017 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="sg-core" containerID="cri-o://9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326" gracePeriod=30
Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.775058 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-notification-agent" containerID="cri-o://56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4" gracePeriod=30
Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.258150 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.376414 4995 generic.go:334] "Generic (PLEG): container finished" podID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerID="3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91" exitCode=0
Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.376670 4995 generic.go:334] "Generic (PLEG): container finished" podID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerID="9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326" exitCode=2
Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.376681 4995 generic.go:334] "Generic (PLEG): container finished" podID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerID="36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c" exitCode=0
Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.376472 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerDied","Data":"3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91"}
Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.376792 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerDied","Data":"9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326"}
Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.376812 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerDied","Data":"36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c"}
Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.378008 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"86cef714-2c2e-4825-bab7-c653df90a3c2","Type":"ContainerStarted","Data":"9bce68767181e4090036940686dd8e2de04500a5fba4896213b150fe5871ac82"}
Jan 26 23:28:08 crc kubenswrapper[4995]: I0126 23:28:08.387375 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"86cef714-2c2e-4825-bab7-c653df90a3c2","Type":"ContainerStarted","Data":"1acae8244e0225e2d330296570ce9e2e40e184a2f56eb510025212d14673224c"}
Jan 26 23:28:08 crc kubenswrapper[4995]: I0126 23:28:08.387800 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 26 23:28:08 crc kubenswrapper[4995]: I0126 23:28:08.408425 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=2.050775055 podStartE2EDuration="2.408393069s" podCreationTimestamp="2026-01-26 23:28:06 +0000 UTC" firstStartedPulling="2026-01-26 23:28:07.252005462 +0000 UTC m=+1191.416712927" lastFinishedPulling="2026-01-26 23:28:07.609623466 +0000 UTC m=+1191.774330941" observedRunningTime="2026-01-26 23:28:08.400490911 +0000 UTC m=+1192.565198376" watchObservedRunningTime="2026-01-26 23:28:08.408393069 +0000 UTC m=+1192.573100564"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.052056 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-hmlpp"]
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.053285 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hmlpp"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.070435 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hmlpp"]
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.160310 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"]
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.161369 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.163452 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.184516 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"]
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.205249 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxdb\" (UniqueName: \"kubernetes.io/projected/81de5920-673a-4656-812a-cd9418a924ad-kube-api-access-cwxdb\") pod \"watcher-db-create-hmlpp\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " pod="watcher-kuttl-default/watcher-db-create-hmlpp"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.205328 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81de5920-673a-4656-812a-cd9418a924ad-operator-scripts\") pod \"watcher-db-create-hmlpp\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " pod="watcher-kuttl-default/watcher-db-create-hmlpp"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.306524 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwxdb\" (UniqueName: \"kubernetes.io/projected/81de5920-673a-4656-812a-cd9418a924ad-kube-api-access-cwxdb\") pod \"watcher-db-create-hmlpp\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " pod="watcher-kuttl-default/watcher-db-create-hmlpp"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.306582 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7n5v\" (UniqueName: \"kubernetes.io/projected/26594adb-ad3b-4555-a2a2-085ac874b80f-kube-api-access-d7n5v\") pod \"watcher-ea1c-account-create-update-9lt5d\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.306616 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81de5920-673a-4656-812a-cd9418a924ad-operator-scripts\") pod \"watcher-db-create-hmlpp\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " pod="watcher-kuttl-default/watcher-db-create-hmlpp"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.306650 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26594adb-ad3b-4555-a2a2-085ac874b80f-operator-scripts\") pod \"watcher-ea1c-account-create-update-9lt5d\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.307444 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81de5920-673a-4656-812a-cd9418a924ad-operator-scripts\") pod \"watcher-db-create-hmlpp\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " pod="watcher-kuttl-default/watcher-db-create-hmlpp"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.326947 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwxdb\" (UniqueName: \"kubernetes.io/projected/81de5920-673a-4656-812a-cd9418a924ad-kube-api-access-cwxdb\") pod \"watcher-db-create-hmlpp\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " pod="watcher-kuttl-default/watcher-db-create-hmlpp"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.369943 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hmlpp"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.415267 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7n5v\" (UniqueName: \"kubernetes.io/projected/26594adb-ad3b-4555-a2a2-085ac874b80f-kube-api-access-d7n5v\") pod \"watcher-ea1c-account-create-update-9lt5d\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.415328 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26594adb-ad3b-4555-a2a2-085ac874b80f-operator-scripts\") pod \"watcher-ea1c-account-create-update-9lt5d\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.417314 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26594adb-ad3b-4555-a2a2-085ac874b80f-operator-scripts\") pod \"watcher-ea1c-account-create-update-9lt5d\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.437068 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7n5v\" (UniqueName: \"kubernetes.io/projected/26594adb-ad3b-4555-a2a2-085ac874b80f-kube-api-access-d7n5v\") pod \"watcher-ea1c-account-create-update-9lt5d\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.481448 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"
Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.630617 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hmlpp"]
Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.060700 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"]
Jan 26 23:28:10 crc kubenswrapper[4995]: W0126 23:28:10.076161 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26594adb_ad3b_4555_a2a2_085ac874b80f.slice/crio-fc6aa815a9ea9f36e2e2e92e7fd82dae698f70f9233899d7ebcf73bb2ad3f934 WatchSource:0}: Error finding container fc6aa815a9ea9f36e2e2e92e7fd82dae698f70f9233899d7ebcf73bb2ad3f934: Status 404 returned error can't find the container with id fc6aa815a9ea9f36e2e2e92e7fd82dae698f70f9233899d7ebcf73bb2ad3f934
Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.414902 4995 generic.go:334] "Generic (PLEG): container finished" podID="81de5920-673a-4656-812a-cd9418a924ad" containerID="21fc0623b802d82a641a134593a6142947f12ed59ae9a3e0731b353104bba872" exitCode=0
Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.414970 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-hmlpp" event={"ID":"81de5920-673a-4656-812a-cd9418a924ad","Type":"ContainerDied","Data":"21fc0623b802d82a641a134593a6142947f12ed59ae9a3e0731b353104bba872"}
Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.415306 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-hmlpp" event={"ID":"81de5920-673a-4656-812a-cd9418a924ad","Type":"ContainerStarted","Data":"0d1ae533ea537da47e0e95a9947b701f4ba1a47c8850e06cdbb1c339c7758d17"}
Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.416893 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" event={"ID":"26594adb-ad3b-4555-a2a2-085ac874b80f","Type":"ContainerStarted","Data":"5c8b671cebf48be8f42cb3eef0c6c4d073d6c81d7a64dfc2632acbf31acbc964"}
Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.416939 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" event={"ID":"26594adb-ad3b-4555-a2a2-085ac874b80f","Type":"ContainerStarted","Data":"fc6aa815a9ea9f36e2e2e92e7fd82dae698f70f9233899d7ebcf73bb2ad3f934"}
Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.452847 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" podStartSLOduration=1.452807395 podStartE2EDuration="1.452807395s" podCreationTimestamp="2026-01-26 23:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:28:10.451487722 +0000 UTC m=+1194.616195197" watchObservedRunningTime="2026-01-26 23:28:10.452807395 +0000 UTC m=+1194.617514860"
Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.894316 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.894365 4995 prober.go:107] "Probe failed"
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.213696 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.252909 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4dwg\" (UniqueName: \"kubernetes.io/projected/f03d11d7-e58e-4d08-85b4-c512e9deb887-kube-api-access-s4dwg\") pod \"f03d11d7-e58e-4d08-85b4-c512e9deb887\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.252989 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-sg-core-conf-yaml\") pod \"f03d11d7-e58e-4d08-85b4-c512e9deb887\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.253027 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-scripts\") pod \"f03d11d7-e58e-4d08-85b4-c512e9deb887\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.253069 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-config-data\") pod \"f03d11d7-e58e-4d08-85b4-c512e9deb887\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.253193 4995 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-log-httpd\") pod \"f03d11d7-e58e-4d08-85b4-c512e9deb887\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.253221 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-combined-ca-bundle\") pod \"f03d11d7-e58e-4d08-85b4-c512e9deb887\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.253870 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f03d11d7-e58e-4d08-85b4-c512e9deb887" (UID: "f03d11d7-e58e-4d08-85b4-c512e9deb887"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.254297 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.259356 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03d11d7-e58e-4d08-85b4-c512e9deb887-kube-api-access-s4dwg" (OuterVolumeSpecName: "kube-api-access-s4dwg") pod "f03d11d7-e58e-4d08-85b4-c512e9deb887" (UID: "f03d11d7-e58e-4d08-85b4-c512e9deb887"). InnerVolumeSpecName "kube-api-access-s4dwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.269732 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-scripts" (OuterVolumeSpecName: "scripts") pod "f03d11d7-e58e-4d08-85b4-c512e9deb887" (UID: "f03d11d7-e58e-4d08-85b4-c512e9deb887"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.286846 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f03d11d7-e58e-4d08-85b4-c512e9deb887" (UID: "f03d11d7-e58e-4d08-85b4-c512e9deb887"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.335815 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f03d11d7-e58e-4d08-85b4-c512e9deb887" (UID: "f03d11d7-e58e-4d08-85b4-c512e9deb887"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.355217 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-config-data" (OuterVolumeSpecName: "config-data") pod "f03d11d7-e58e-4d08-85b4-c512e9deb887" (UID: "f03d11d7-e58e-4d08-85b4-c512e9deb887"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.355733 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-run-httpd\") pod \"f03d11d7-e58e-4d08-85b4-c512e9deb887\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.356214 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f03d11d7-e58e-4d08-85b4-c512e9deb887" (UID: "f03d11d7-e58e-4d08-85b4-c512e9deb887"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.356566 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.356592 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4dwg\" (UniqueName: \"kubernetes.io/projected/f03d11d7-e58e-4d08-85b4-c512e9deb887-kube-api-access-s4dwg\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.356605 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.356618 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.356629 4995 reconciler_common.go:293] "Volume detached 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.356639 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.424842 4995 generic.go:334] "Generic (PLEG): container finished" podID="26594adb-ad3b-4555-a2a2-085ac874b80f" containerID="5c8b671cebf48be8f42cb3eef0c6c4d073d6c81d7a64dfc2632acbf31acbc964" exitCode=0 Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.424927 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" event={"ID":"26594adb-ad3b-4555-a2a2-085ac874b80f","Type":"ContainerDied","Data":"5c8b671cebf48be8f42cb3eef0c6c4d073d6c81d7a64dfc2632acbf31acbc964"} Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.427288 4995 generic.go:334] "Generic (PLEG): container finished" podID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerID="56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4" exitCode=0 Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.427331 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerDied","Data":"56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4"} Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.427379 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerDied","Data":"b81f7b39adcb26c2a82824c28d1ccdf73fe6f1cd66212b80b1c42bda2ae42625"} Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.427386 4995 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.427397 4995 scope.go:117] "RemoveContainer" containerID="3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.449897 4995 scope.go:117] "RemoveContainer" containerID="9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.463355 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.469707 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.488139 4995 scope.go:117] "RemoveContainer" containerID="56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.500067 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.500718 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-notification-agent" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.500793 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-notification-agent" Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.500865 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-central-agent" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.500923 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-central-agent" Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.501011 4995 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="proxy-httpd" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.501071 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="proxy-httpd" Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.501161 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="sg-core" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.501227 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="sg-core" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.501476 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-central-agent" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.501555 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="sg-core" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.501627 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-notification-agent" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.501696 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="proxy-httpd" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.504340 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.509389 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.510200 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.510329 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.527698 4995 scope.go:117] "RemoveContainer" containerID="36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.534853 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.581197 4995 scope.go:117] "RemoveContainer" containerID="3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91" Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.582808 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91\": container with ID starting with 3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91 not found: ID does not exist" containerID="3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.582917 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91"} err="failed to get container status \"3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91\": rpc error: code = NotFound desc = could not find container 
\"3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91\": container with ID starting with 3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91 not found: ID does not exist" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.582991 4995 scope.go:117] "RemoveContainer" containerID="9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326" Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.584019 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326\": container with ID starting with 9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326 not found: ID does not exist" containerID="9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.584067 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326"} err="failed to get container status \"9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326\": rpc error: code = NotFound desc = could not find container \"9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326\": container with ID starting with 9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326 not found: ID does not exist" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.584096 4995 scope.go:117] "RemoveContainer" containerID="56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4" Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.584492 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4\": container with ID starting with 56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4 not found: ID does not exist" 
containerID="56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.584515 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4"} err="failed to get container status \"56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4\": rpc error: code = NotFound desc = could not find container \"56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4\": container with ID starting with 56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4 not found: ID does not exist" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.584531 4995 scope.go:117] "RemoveContainer" containerID="36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c" Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.584724 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c\": container with ID starting with 36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c not found: ID does not exist" containerID="36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.584745 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c"} err="failed to get container status \"36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c\": rpc error: code = NotFound desc = could not find container \"36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c\": container with ID starting with 36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c not found: ID does not exist" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662556 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662639 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662670 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-scripts\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662698 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-config-data\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662724 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwvvq\" (UniqueName: \"kubernetes.io/projected/bb374cf7-1f64-4981-8500-45743b6c245d-kube-api-access-xwvvq\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662776 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662800 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-run-httpd\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662833 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-log-httpd\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764154 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764232 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764256 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-scripts\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764280 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-config-data\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764312 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwvvq\" (UniqueName: \"kubernetes.io/projected/bb374cf7-1f64-4981-8500-45743b6c245d-kube-api-access-xwvvq\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764353 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-run-httpd\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764369 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764389 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-log-httpd\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764908 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-log-httpd\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.765230 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-run-httpd\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.769298 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.770092 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.770694 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.772582 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-scripts\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.788356 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-config-data\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.790167 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwvvq\" (UniqueName: \"kubernetes.io/projected/bb374cf7-1f64-4981-8500-45743b6c245d-kube-api-access-xwvvq\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.825881 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.829661 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hmlpp" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.972681 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81de5920-673a-4656-812a-cd9418a924ad-operator-scripts\") pod \"81de5920-673a-4656-812a-cd9418a924ad\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.972759 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwxdb\" (UniqueName: \"kubernetes.io/projected/81de5920-673a-4656-812a-cd9418a924ad-kube-api-access-cwxdb\") pod \"81de5920-673a-4656-812a-cd9418a924ad\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.973704 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81de5920-673a-4656-812a-cd9418a924ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81de5920-673a-4656-812a-cd9418a924ad" (UID: "81de5920-673a-4656-812a-cd9418a924ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.974169 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81de5920-673a-4656-812a-cd9418a924ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.977725 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81de5920-673a-4656-812a-cd9418a924ad-kube-api-access-cwxdb" (OuterVolumeSpecName: "kube-api-access-cwxdb") pod "81de5920-673a-4656-812a-cd9418a924ad" (UID: "81de5920-673a-4656-812a-cd9418a924ad"). InnerVolumeSpecName "kube-api-access-cwxdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.075757 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwxdb\" (UniqueName: \"kubernetes.io/projected/81de5920-673a-4656-812a-cd9418a924ad-kube-api-access-cwxdb\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.271867 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.436439 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-hmlpp" event={"ID":"81de5920-673a-4656-812a-cd9418a924ad","Type":"ContainerDied","Data":"0d1ae533ea537da47e0e95a9947b701f4ba1a47c8850e06cdbb1c339c7758d17"} Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.436782 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d1ae533ea537da47e0e95a9947b701f4ba1a47c8850e06cdbb1c339c7758d17" Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.436467 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hmlpp" Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.438790 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerStarted","Data":"dbe0ac9e615dc8b84fc279cb1855295fa12e48224c261aad6672dc012a0042f7"} Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.533001 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" path="/var/lib/kubelet/pods/f03d11d7-e58e-4d08-85b4-c512e9deb887/volumes" Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.865957 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.990309 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7n5v\" (UniqueName: \"kubernetes.io/projected/26594adb-ad3b-4555-a2a2-085ac874b80f-kube-api-access-d7n5v\") pod \"26594adb-ad3b-4555-a2a2-085ac874b80f\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.990465 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26594adb-ad3b-4555-a2a2-085ac874b80f-operator-scripts\") pod \"26594adb-ad3b-4555-a2a2-085ac874b80f\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.996624 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26594adb-ad3b-4555-a2a2-085ac874b80f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26594adb-ad3b-4555-a2a2-085ac874b80f" (UID: "26594adb-ad3b-4555-a2a2-085ac874b80f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:28:13 crc kubenswrapper[4995]: I0126 23:28:13.002274 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26594adb-ad3b-4555-a2a2-085ac874b80f-kube-api-access-d7n5v" (OuterVolumeSpecName: "kube-api-access-d7n5v") pod "26594adb-ad3b-4555-a2a2-085ac874b80f" (UID: "26594adb-ad3b-4555-a2a2-085ac874b80f"). InnerVolumeSpecName "kube-api-access-d7n5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:28:13 crc kubenswrapper[4995]: I0126 23:28:13.091979 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7n5v\" (UniqueName: \"kubernetes.io/projected/26594adb-ad3b-4555-a2a2-085ac874b80f-kube-api-access-d7n5v\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:13 crc kubenswrapper[4995]: I0126 23:28:13.092018 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26594adb-ad3b-4555-a2a2-085ac874b80f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:13 crc kubenswrapper[4995]: I0126 23:28:13.450114 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" event={"ID":"26594adb-ad3b-4555-a2a2-085ac874b80f","Type":"ContainerDied","Data":"fc6aa815a9ea9f36e2e2e92e7fd82dae698f70f9233899d7ebcf73bb2ad3f934"} Jan 26 23:28:13 crc kubenswrapper[4995]: I0126 23:28:13.450155 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc6aa815a9ea9f36e2e2e92e7fd82dae698f70f9233899d7ebcf73bb2ad3f934" Jan 26 23:28:13 crc kubenswrapper[4995]: I0126 23:28:13.450204 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" Jan 26 23:28:13 crc kubenswrapper[4995]: I0126 23:28:13.452739 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerStarted","Data":"7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33"} Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.398571 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx"] Jan 26 23:28:14 crc kubenswrapper[4995]: E0126 23:28:14.399349 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81de5920-673a-4656-812a-cd9418a924ad" containerName="mariadb-database-create" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.399361 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="81de5920-673a-4656-812a-cd9418a924ad" containerName="mariadb-database-create" Jan 26 23:28:14 crc kubenswrapper[4995]: E0126 23:28:14.399384 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26594adb-ad3b-4555-a2a2-085ac874b80f" containerName="mariadb-account-create-update" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.399390 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="26594adb-ad3b-4555-a2a2-085ac874b80f" containerName="mariadb-account-create-update" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.399525 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="26594adb-ad3b-4555-a2a2-085ac874b80f" containerName="mariadb-account-create-update" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.399543 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="81de5920-673a-4656-812a-cd9418a924ad" containerName="mariadb-database-create" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.400033 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.401909 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.402176 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-wtfkp" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.416883 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx"] Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.485879 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerStarted","Data":"f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0"} Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.486171 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerStarted","Data":"9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885"} Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.515364 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-config-data\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.515443 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-db-sync-config-data\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: 
\"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.515497 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zppxx\" (UniqueName: \"kubernetes.io/projected/c14084ec-5346-48a7-8e93-0d4638601584-kube-api-access-zppxx\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.515526 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.617260 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.617354 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-config-data\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.618057 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-db-sync-config-data\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.618217 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zppxx\" (UniqueName: \"kubernetes.io/projected/c14084ec-5346-48a7-8e93-0d4638601584-kube-api-access-zppxx\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.621844 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.621876 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-config-data\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.625468 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-db-sync-config-data\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.650616 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zppxx\" (UniqueName: 
\"kubernetes.io/projected/c14084ec-5346-48a7-8e93-0d4638601584-kube-api-access-zppxx\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.727943 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:15 crc kubenswrapper[4995]: I0126 23:28:15.169189 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx"] Jan 26 23:28:15 crc kubenswrapper[4995]: I0126 23:28:15.498217 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" event={"ID":"c14084ec-5346-48a7-8e93-0d4638601584","Type":"ContainerStarted","Data":"388c50d18a23e10226005bb761b532a970180186cfbc322bfd8ac5e8e2e0d0dd"} Jan 26 23:28:16 crc kubenswrapper[4995]: I0126 23:28:16.547506 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerStarted","Data":"b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a"} Jan 26 23:28:16 crc kubenswrapper[4995]: I0126 23:28:16.548012 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:16 crc kubenswrapper[4995]: I0126 23:28:16.595147 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.037417901 podStartE2EDuration="5.595127609s" podCreationTimestamp="2026-01-26 23:28:11 +0000 UTC" firstStartedPulling="2026-01-26 23:28:12.27759578 +0000 UTC m=+1196.442303245" lastFinishedPulling="2026-01-26 23:28:15.835305488 +0000 UTC m=+1200.000012953" observedRunningTime="2026-01-26 23:28:16.589909299 +0000 UTC m=+1200.754616764" watchObservedRunningTime="2026-01-26 
23:28:16.595127609 +0000 UTC m=+1200.759835074" Jan 26 23:28:16 crc kubenswrapper[4995]: I0126 23:28:16.790766 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:32 crc kubenswrapper[4995]: E0126 23:28:32.250515 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 26 23:28:32 crc kubenswrapper[4995]: E0126 23:28:32.251158 4995 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 26 23:28:32 crc kubenswrapper[4995]: E0126 23:28:32.251298 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-kuttl-db-sync,Image:38.102.83.223:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zppxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
watcher-kuttl-db-sync-7k9rx_watcher-kuttl-default(c14084ec-5346-48a7-8e93-0d4638601584): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:28:32 crc kubenswrapper[4995]: E0126 23:28:32.252540 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" podUID="c14084ec-5346-48a7-8e93-0d4638601584" Jan 26 23:28:32 crc kubenswrapper[4995]: E0126 23:28:32.682387 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" podUID="c14084ec-5346-48a7-8e93-0d4638601584" Jan 26 23:28:40 crc kubenswrapper[4995]: I0126 23:28:40.894373 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:28:40 crc kubenswrapper[4995]: I0126 23:28:40.895080 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:28:41 crc kubenswrapper[4995]: I0126 23:28:41.834819 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:48 crc kubenswrapper[4995]: I0126 23:28:48.817365 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" event={"ID":"c14084ec-5346-48a7-8e93-0d4638601584","Type":"ContainerStarted","Data":"2db44657dba863e9126ee66626ff3e903712a488e479e67578bed8c8358c38cb"} Jan 26 23:28:48 crc kubenswrapper[4995]: I0126 23:28:48.837225 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" podStartSLOduration=2.364789971 podStartE2EDuration="34.83720559s" podCreationTimestamp="2026-01-26 23:28:14 +0000 UTC" firstStartedPulling="2026-01-26 23:28:15.169724379 +0000 UTC m=+1199.334431844" lastFinishedPulling="2026-01-26 23:28:47.642139998 +0000 UTC m=+1231.806847463" observedRunningTime="2026-01-26 23:28:48.836005 +0000 UTC m=+1233.000712455" watchObservedRunningTime="2026-01-26 23:28:48.83720559 +0000 UTC m=+1233.001913055" Jan 26 23:28:51 crc kubenswrapper[4995]: I0126 23:28:51.845338 4995 generic.go:334] "Generic (PLEG): container finished" podID="c14084ec-5346-48a7-8e93-0d4638601584" containerID="2db44657dba863e9126ee66626ff3e903712a488e479e67578bed8c8358c38cb" exitCode=0 Jan 26 23:28:51 crc kubenswrapper[4995]: I0126 23:28:51.845456 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" event={"ID":"c14084ec-5346-48a7-8e93-0d4638601584","Type":"ContainerDied","Data":"2db44657dba863e9126ee66626ff3e903712a488e479e67578bed8c8358c38cb"} Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.247501 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.307593 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zppxx\" (UniqueName: \"kubernetes.io/projected/c14084ec-5346-48a7-8e93-0d4638601584-kube-api-access-zppxx\") pod \"c14084ec-5346-48a7-8e93-0d4638601584\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.307728 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-db-sync-config-data\") pod \"c14084ec-5346-48a7-8e93-0d4638601584\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.307847 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-combined-ca-bundle\") pod \"c14084ec-5346-48a7-8e93-0d4638601584\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.307895 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-config-data\") pod \"c14084ec-5346-48a7-8e93-0d4638601584\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.334024 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c14084ec-5346-48a7-8e93-0d4638601584" (UID: "c14084ec-5346-48a7-8e93-0d4638601584"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.351795 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c14084ec-5346-48a7-8e93-0d4638601584" (UID: "c14084ec-5346-48a7-8e93-0d4638601584"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.352306 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c14084ec-5346-48a7-8e93-0d4638601584-kube-api-access-zppxx" (OuterVolumeSpecName: "kube-api-access-zppxx") pod "c14084ec-5346-48a7-8e93-0d4638601584" (UID: "c14084ec-5346-48a7-8e93-0d4638601584"). InnerVolumeSpecName "kube-api-access-zppxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.396212 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-config-data" (OuterVolumeSpecName: "config-data") pod "c14084ec-5346-48a7-8e93-0d4638601584" (UID: "c14084ec-5346-48a7-8e93-0d4638601584"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.409097 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.409164 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.409177 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zppxx\" (UniqueName: \"kubernetes.io/projected/c14084ec-5346-48a7-8e93-0d4638601584-kube-api-access-zppxx\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.409189 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.864426 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" event={"ID":"c14084ec-5346-48a7-8e93-0d4638601584","Type":"ContainerDied","Data":"388c50d18a23e10226005bb761b532a970180186cfbc322bfd8ac5e8e2e0d0dd"} Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.864480 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="388c50d18a23e10226005bb761b532a970180186cfbc322bfd8ac5e8e2e0d0dd" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.864511 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.337964 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:28:54 crc kubenswrapper[4995]: E0126 23:28:54.338676 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c14084ec-5346-48a7-8e93-0d4638601584" containerName="watcher-kuttl-db-sync" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.338704 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c14084ec-5346-48a7-8e93-0d4638601584" containerName="watcher-kuttl-db-sync" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.338908 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c14084ec-5346-48a7-8e93-0d4638601584" containerName="watcher-kuttl-db-sync" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.339586 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.341879 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-wtfkp" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.341997 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.346978 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.348727 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.351628 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.356637 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.366282 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.423559 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425777 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxvph\" (UniqueName: \"kubernetes.io/projected/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-kube-api-access-gxvph\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425831 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425861 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/137b9b9c-ff0c-461b-9731-8322ae411e99-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 
23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425881 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425908 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n57g\" (UniqueName: \"kubernetes.io/projected/137b9b9c-ff0c-461b-9731-8322ae411e99-kube-api-access-8n57g\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425926 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425963 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425992 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-logs\") pod \"watcher-kuttl-api-0\" (UID: 
\"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.426006 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.426066 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.426176 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.428551 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.449850 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.526839 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n57g\" (UniqueName: \"kubernetes.io/projected/137b9b9c-ff0c-461b-9731-8322ae411e99-kube-api-access-8n57g\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.526879 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.526925 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.526949 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-logs\") pod \"watcher-kuttl-applier-0\" (UID: 
\"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.526967 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.526982 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527194 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527634 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb9q9\" (UniqueName: \"kubernetes.io/projected/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-kube-api-access-wb9q9\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527694 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527783 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxvph\" (UniqueName: \"kubernetes.io/projected/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-kube-api-access-gxvph\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527825 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527852 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527903 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/137b9b9c-ff0c-461b-9731-8322ae411e99-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527928 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: 
\"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.528665 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/137b9b9c-ff0c-461b-9731-8322ae411e99-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.532497 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.536435 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.536749 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.536859 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.537706 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.541064 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.543744 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxvph\" (UniqueName: \"kubernetes.io/projected/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-kube-api-access-gxvph\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.544048 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n57g\" (UniqueName: \"kubernetes.io/projected/137b9b9c-ff0c-461b-9731-8322ae411e99-kube-api-access-8n57g\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.551470 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.629273 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.629337 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.629578 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb9q9\" (UniqueName: \"kubernetes.io/projected/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-kube-api-access-wb9q9\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.629627 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.629847 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.633122 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.634175 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.648970 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb9q9\" (UniqueName: \"kubernetes.io/projected/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-kube-api-access-wb9q9\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.659864 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.672585 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.748803 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.138414 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.217033 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:28:55 crc kubenswrapper[4995]: W0126 23:28:55.218969 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod137b9b9c_ff0c_461b_9731_8322ae411e99.slice/crio-4ac0957f2ab63623cc81f58778223691d3bf275c4fb81b1740c6d15825d4a263 WatchSource:0}: Error finding container 4ac0957f2ab63623cc81f58778223691d3bf275c4fb81b1740c6d15825d4a263: Status 404 returned error can't find the container with id 4ac0957f2ab63623cc81f58778223691d3bf275c4fb81b1740c6d15825d4a263 Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.272054 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:28:55 crc kubenswrapper[4995]: W0126 23:28:55.284388 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod754307e8_af63_4e45_8bbe_b4daf4ba4e1e.slice/crio-1192b13cd713adc467415e40fdefdf9c5c74e713846c370c81b0cb0acaaac6eb WatchSource:0}: Error finding container 1192b13cd713adc467415e40fdefdf9c5c74e713846c370c81b0cb0acaaac6eb: Status 404 returned error can't find the container with id 1192b13cd713adc467415e40fdefdf9c5c74e713846c370c81b0cb0acaaac6eb Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.886633 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"137b9b9c-ff0c-461b-9731-8322ae411e99","Type":"ContainerStarted","Data":"4ac0957f2ab63623cc81f58778223691d3bf275c4fb81b1740c6d15825d4a263"} Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.888639 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"754307e8-af63-4e45-8bbe-b4daf4ba4e1e","Type":"ContainerStarted","Data":"1192b13cd713adc467415e40fdefdf9c5c74e713846c370c81b0cb0acaaac6eb"} Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.891269 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a","Type":"ContainerStarted","Data":"309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826"} Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.891315 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a","Type":"ContainerStarted","Data":"7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22"} Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.891326 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a","Type":"ContainerStarted","Data":"eb2bd9a9347adb71b2820f1e1c4d33905377b5c57d14b319ec1266892f2f2ad3"} Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.891582 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.916131 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.9160877649999999 podStartE2EDuration="1.916087765s" podCreationTimestamp="2026-01-26 23:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:28:55.907794498 +0000 UTC m=+1240.072501963" watchObservedRunningTime="2026-01-26 23:28:55.916087765 +0000 UTC m=+1240.080795240" Jan 26 23:28:56 crc kubenswrapper[4995]: I0126 23:28:56.898960 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"754307e8-af63-4e45-8bbe-b4daf4ba4e1e","Type":"ContainerStarted","Data":"b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432"} Jan 26 23:28:56 crc kubenswrapper[4995]: I0126 23:28:56.902730 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"137b9b9c-ff0c-461b-9731-8322ae411e99","Type":"ContainerStarted","Data":"38e04a8783a7a6b7dfb30a4ee34a81ba70fceb4a22c66572b6533babbef0e4a8"} Jan 26 23:28:56 crc kubenswrapper[4995]: I0126 23:28:56.919385 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.869192154 podStartE2EDuration="2.91935716s" podCreationTimestamp="2026-01-26 23:28:54 +0000 UTC" firstStartedPulling="2026-01-26 23:28:55.286795681 +0000 UTC m=+1239.451503166" lastFinishedPulling="2026-01-26 23:28:56.336960677 +0000 UTC m=+1240.501668172" observedRunningTime="2026-01-26 23:28:56.91491398 +0000 UTC m=+1241.079621445" watchObservedRunningTime="2026-01-26 23:28:56.91935716 +0000 UTC m=+1241.084064625" Jan 26 23:28:56 crc kubenswrapper[4995]: I0126 23:28:56.935331 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.826749575 podStartE2EDuration="2.935314099s" podCreationTimestamp="2026-01-26 23:28:54 +0000 UTC" firstStartedPulling="2026-01-26 23:28:55.220878596 +0000 UTC m=+1239.385586061" lastFinishedPulling="2026-01-26 23:28:56.32944312 +0000 UTC m=+1240.494150585" 
observedRunningTime="2026-01-26 23:28:56.931572245 +0000 UTC m=+1241.096279710" watchObservedRunningTime="2026-01-26 23:28:56.935314099 +0000 UTC m=+1241.100021564" Jan 26 23:28:58 crc kubenswrapper[4995]: I0126 23:28:58.503557 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:59 crc kubenswrapper[4995]: I0126 23:28:59.686067 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:59 crc kubenswrapper[4995]: I0126 23:28:59.750459 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.660076 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.673677 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.680472 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.683721 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.749586 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.777730 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.976394 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.979891 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:05 crc kubenswrapper[4995]: I0126 23:29:05.006650 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:05 crc kubenswrapper[4995]: I0126 23:29:05.010591 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.214505 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.215123 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-central-agent" containerID="cri-o://7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33" gracePeriod=30 Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.215206 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="sg-core" containerID="cri-o://f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0" gracePeriod=30 Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.215250 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-notification-agent" containerID="cri-o://9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885" gracePeriod=30 Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.215376 4995 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="proxy-httpd" containerID="cri-o://b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a" gracePeriod=30 Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.361691 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.373546 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.437345 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherea1c-account-delete-qmvms"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.438642 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.451332 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.451608 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="754307e8-af63-4e45-8bbe-b4daf4ba4e1e" containerName="watcher-applier" containerID="cri-o://b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432" gracePeriod=30 Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.460826 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherea1c-account-delete-qmvms"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.478883 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.551321 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.551528 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-kuttl-api-log" containerID="cri-o://7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22" gracePeriod=30 Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.551907 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-api" containerID="cri-o://309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826" gracePeriod=30 Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.576549 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d491283-0ac3-4f24-88c1-6a380d594919-operator-scripts\") pod \"watcherea1c-account-delete-qmvms\" (UID: \"1d491283-0ac3-4f24-88c1-6a380d594919\") " pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.576674 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcfxt\" (UniqueName: \"kubernetes.io/projected/1d491283-0ac3-4f24-88c1-6a380d594919-kube-api-access-jcfxt\") pod \"watcherea1c-account-delete-qmvms\" (UID: \"1d491283-0ac3-4f24-88c1-6a380d594919\") " pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.678589 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d491283-0ac3-4f24-88c1-6a380d594919-operator-scripts\") pod \"watcherea1c-account-delete-qmvms\" (UID: 
\"1d491283-0ac3-4f24-88c1-6a380d594919\") " pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.679476 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcfxt\" (UniqueName: \"kubernetes.io/projected/1d491283-0ac3-4f24-88c1-6a380d594919-kube-api-access-jcfxt\") pod \"watcherea1c-account-delete-qmvms\" (UID: \"1d491283-0ac3-4f24-88c1-6a380d594919\") " pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.679676 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d491283-0ac3-4f24-88c1-6a380d594919-operator-scripts\") pod \"watcherea1c-account-delete-qmvms\" (UID: \"1d491283-0ac3-4f24-88c1-6a380d594919\") " pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.707040 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcfxt\" (UniqueName: \"kubernetes.io/projected/1d491283-0ac3-4f24-88c1-6a380d594919-kube-api-access-jcfxt\") pod \"watcherea1c-account-delete-qmvms\" (UID: \"1d491283-0ac3-4f24-88c1-6a380d594919\") " pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.781285 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.032291 4995 generic.go:334] "Generic (PLEG): container finished" podID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerID="7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22" exitCode=143 Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.032531 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a","Type":"ContainerDied","Data":"7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22"} Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.045451 4995 generic.go:334] "Generic (PLEG): container finished" podID="bb374cf7-1f64-4981-8500-45743b6c245d" containerID="b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a" exitCode=0 Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.045469 4995 generic.go:334] "Generic (PLEG): container finished" podID="bb374cf7-1f64-4981-8500-45743b6c245d" containerID="f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0" exitCode=2 Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.045478 4995 generic.go:334] "Generic (PLEG): container finished" podID="bb374cf7-1f64-4981-8500-45743b6c245d" containerID="7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33" exitCode=0 Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.045613 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="137b9b9c-ff0c-461b-9731-8322ae411e99" containerName="watcher-decision-engine" containerID="cri-o://38e04a8783a7a6b7dfb30a4ee34a81ba70fceb4a22c66572b6533babbef0e4a8" gracePeriod=30 Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.045860 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerDied","Data":"b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a"} Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.045884 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerDied","Data":"f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0"} Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.045893 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerDied","Data":"7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33"} Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.310654 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherea1c-account-delete-qmvms"] Jan 26 23:29:08 crc kubenswrapper[4995]: W0126 23:29:08.327596 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d491283_0ac3_4f24_88c1_6a380d594919.slice/crio-4d829ac560b84f8a06da97bc770ac8e3a715be7116d2f0d12e39059dd8d28066 WatchSource:0}: Error finding container 4d829ac560b84f8a06da97bc770ac8e3a715be7116d2f0d12e39059dd8d28066: Status 404 returned error can't find the container with id 4d829ac560b84f8a06da97bc770ac8e3a715be7116d2f0d12e39059dd8d28066 Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.532119 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c14084ec-5346-48a7-8e93-0d4638601584" path="/var/lib/kubelet/pods/c14084ec-5346-48a7-8e93-0d4638601584/volumes" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.749078 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.806136 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxvph\" (UniqueName: \"kubernetes.io/projected/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-kube-api-access-gxvph\") pod \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.806203 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-combined-ca-bundle\") pod \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.806235 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-logs\") pod \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.806296 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-config-data\") pod \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.806398 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-custom-prometheus-ca\") pod \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.808568 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-logs" (OuterVolumeSpecName: "logs") pod "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" (UID: "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.813036 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-kube-api-access-gxvph" (OuterVolumeSpecName: "kube-api-access-gxvph") pod "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" (UID: "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a"). InnerVolumeSpecName "kube-api-access-gxvph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.835305 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" (UID: "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.846397 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" (UID: "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.866009 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-config-data" (OuterVolumeSpecName: "config-data") pod "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" (UID: "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.907402 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxvph\" (UniqueName: \"kubernetes.io/projected/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-kube-api-access-gxvph\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.907441 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.907453 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.907465 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.907478 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.054689 4995 generic.go:334] "Generic (PLEG): container finished" podID="1d491283-0ac3-4f24-88c1-6a380d594919" containerID="558c3ee7288987b85477ab6a956972ed10ae51e028f06cd7ca485975cd8be8ff" exitCode=0 Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.054754 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" 
event={"ID":"1d491283-0ac3-4f24-88c1-6a380d594919","Type":"ContainerDied","Data":"558c3ee7288987b85477ab6a956972ed10ae51e028f06cd7ca485975cd8be8ff"} Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.054779 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" event={"ID":"1d491283-0ac3-4f24-88c1-6a380d594919","Type":"ContainerStarted","Data":"4d829ac560b84f8a06da97bc770ac8e3a715be7116d2f0d12e39059dd8d28066"} Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.056739 4995 generic.go:334] "Generic (PLEG): container finished" podID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerID="309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826" exitCode=0 Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.056802 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a","Type":"ContainerDied","Data":"309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826"} Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.056812 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.056847 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a","Type":"ContainerDied","Data":"eb2bd9a9347adb71b2820f1e1c4d33905377b5c57d14b319ec1266892f2f2ad3"} Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.056876 4995 scope.go:117] "RemoveContainer" containerID="309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826" Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.087011 4995 scope.go:117] "RemoveContainer" containerID="7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22" Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.102758 4995 scope.go:117] "RemoveContainer" containerID="309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826" Jan 26 23:29:09 crc kubenswrapper[4995]: E0126 23:29:09.103158 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826\": container with ID starting with 309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826 not found: ID does not exist" containerID="309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826" Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.103200 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826"} err="failed to get container status \"309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826\": rpc error: code = NotFound desc = could not find container \"309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826\": container with ID starting with 309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826 not found: ID does not exist" Jan 26 23:29:09 
crc kubenswrapper[4995]: I0126 23:29:09.103226 4995 scope.go:117] "RemoveContainer" containerID="7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22" Jan 26 23:29:09 crc kubenswrapper[4995]: E0126 23:29:09.103446 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22\": container with ID starting with 7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22 not found: ID does not exist" containerID="7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22" Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.103464 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22"} err="failed to get container status \"7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22\": rpc error: code = NotFound desc = could not find container \"7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22\": container with ID starting with 7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22 not found: ID does not exist" Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.107664 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.114427 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:09 crc kubenswrapper[4995]: E0126 23:29:09.755950 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:29:09 crc kubenswrapper[4995]: E0126 
23:29:09.762538 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:29:09 crc kubenswrapper[4995]: E0126 23:29:09.764410 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:29:09 crc kubenswrapper[4995]: E0126 23:29:09.764472 4995 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="754307e8-af63-4e45-8bbe-b4daf4ba4e1e" containerName="watcher-applier" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.067219 4995 generic.go:334] "Generic (PLEG): container finished" podID="137b9b9c-ff0c-461b-9731-8322ae411e99" containerID="38e04a8783a7a6b7dfb30a4ee34a81ba70fceb4a22c66572b6533babbef0e4a8" exitCode=0 Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.067295 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"137b9b9c-ff0c-461b-9731-8322ae411e99","Type":"ContainerDied","Data":"38e04a8783a7a6b7dfb30a4ee34a81ba70fceb4a22c66572b6533babbef0e4a8"} Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.067337 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"137b9b9c-ff0c-461b-9731-8322ae411e99","Type":"ContainerDied","Data":"4ac0957f2ab63623cc81f58778223691d3bf275c4fb81b1740c6d15825d4a263"} Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.067349 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ac0957f2ab63623cc81f58778223691d3bf275c4fb81b1740c6d15825d4a263" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.097679 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.230738 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/137b9b9c-ff0c-461b-9731-8322ae411e99-logs\") pod \"137b9b9c-ff0c-461b-9731-8322ae411e99\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.231233 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/137b9b9c-ff0c-461b-9731-8322ae411e99-logs" (OuterVolumeSpecName: "logs") pod "137b9b9c-ff0c-461b-9731-8322ae411e99" (UID: "137b9b9c-ff0c-461b-9731-8322ae411e99"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.231394 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-combined-ca-bundle\") pod \"137b9b9c-ff0c-461b-9731-8322ae411e99\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.231431 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-config-data\") pod \"137b9b9c-ff0c-461b-9731-8322ae411e99\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.231539 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n57g\" (UniqueName: \"kubernetes.io/projected/137b9b9c-ff0c-461b-9731-8322ae411e99-kube-api-access-8n57g\") pod \"137b9b9c-ff0c-461b-9731-8322ae411e99\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.231588 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-custom-prometheus-ca\") pod \"137b9b9c-ff0c-461b-9731-8322ae411e99\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.231984 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/137b9b9c-ff0c-461b-9731-8322ae411e99-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.243341 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/137b9b9c-ff0c-461b-9731-8322ae411e99-kube-api-access-8n57g" (OuterVolumeSpecName: 
"kube-api-access-8n57g") pod "137b9b9c-ff0c-461b-9731-8322ae411e99" (UID: "137b9b9c-ff0c-461b-9731-8322ae411e99"). InnerVolumeSpecName "kube-api-access-8n57g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.262089 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "137b9b9c-ff0c-461b-9731-8322ae411e99" (UID: "137b9b9c-ff0c-461b-9731-8322ae411e99"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.290407 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "137b9b9c-ff0c-461b-9731-8322ae411e99" (UID: "137b9b9c-ff0c-461b-9731-8322ae411e99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.310345 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-config-data" (OuterVolumeSpecName: "config-data") pod "137b9b9c-ff0c-461b-9731-8322ae411e99" (UID: "137b9b9c-ff0c-461b-9731-8322ae411e99"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.337932 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.337983 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.337999 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n57g\" (UniqueName: \"kubernetes.io/projected/137b9b9c-ff0c-461b-9731-8322ae411e99-kube-api-access-8n57g\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.338011 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.449393 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.530128 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" path="/var/lib/kubelet/pods/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a/volumes" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.642163 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcfxt\" (UniqueName: \"kubernetes.io/projected/1d491283-0ac3-4f24-88c1-6a380d594919-kube-api-access-jcfxt\") pod \"1d491283-0ac3-4f24-88c1-6a380d594919\" (UID: \"1d491283-0ac3-4f24-88c1-6a380d594919\") " Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.642593 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d491283-0ac3-4f24-88c1-6a380d594919-operator-scripts\") pod \"1d491283-0ac3-4f24-88c1-6a380d594919\" (UID: \"1d491283-0ac3-4f24-88c1-6a380d594919\") " Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.643253 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d491283-0ac3-4f24-88c1-6a380d594919-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d491283-0ac3-4f24-88c1-6a380d594919" (UID: "1d491283-0ac3-4f24-88c1-6a380d594919"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.650864 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d491283-0ac3-4f24-88c1-6a380d594919-kube-api-access-jcfxt" (OuterVolumeSpecName: "kube-api-access-jcfxt") pod "1d491283-0ac3-4f24-88c1-6a380d594919" (UID: "1d491283-0ac3-4f24-88c1-6a380d594919"). InnerVolumeSpecName "kube-api-access-jcfxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.744290 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcfxt\" (UniqueName: \"kubernetes.io/projected/1d491283-0ac3-4f24-88c1-6a380d594919-kube-api-access-jcfxt\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.744332 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d491283-0ac3-4f24-88c1-6a380d594919-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.894288 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.894379 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.894443 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.895460 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45bd20296ff6d5aa0cde32c140dff26a4c42cad2ac9cddbd09b95d31149b3d69"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" 
Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.895559 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" containerID="cri-o://45bd20296ff6d5aa0cde32c140dff26a4c42cad2ac9cddbd09b95d31149b3d69" gracePeriod=600 Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.093560 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="45bd20296ff6d5aa0cde32c140dff26a4c42cad2ac9cddbd09b95d31149b3d69" exitCode=0 Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.093645 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"45bd20296ff6d5aa0cde32c140dff26a4c42cad2ac9cddbd09b95d31149b3d69"} Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.094026 4995 scope.go:117] "RemoveContainer" containerID="c18e947f3e89f6e4fe1ccdfb2540e67e2ab73a82cdb82488bfa3e6e58cba1576" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.102380 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.102412 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.102415 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" event={"ID":"1d491283-0ac3-4f24-88c1-6a380d594919","Type":"ContainerDied","Data":"4d829ac560b84f8a06da97bc770ac8e3a715be7116d2f0d12e39059dd8d28066"} Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.103611 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d829ac560b84f8a06da97bc770ac8e3a715be7116d2f0d12e39059dd8d28066" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.135510 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.145320 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.725844 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760582 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-combined-ca-bundle\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760624 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-sg-core-conf-yaml\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760646 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-config-data\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760697 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-scripts\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760773 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-run-httpd\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760851 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwvvq\" (UniqueName: 
\"kubernetes.io/projected/bb374cf7-1f64-4981-8500-45743b6c245d-kube-api-access-xwvvq\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760901 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-ceilometer-tls-certs\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760929 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-log-httpd\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.762009 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.764000 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.769440 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb374cf7-1f64-4981-8500-45743b6c245d-kube-api-access-xwvvq" (OuterVolumeSpecName: "kube-api-access-xwvvq") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "kube-api-access-xwvvq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.772597 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-scripts" (OuterVolumeSpecName: "scripts") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.814657 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.815303 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.854081 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.862651 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.862695 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwvvq\" (UniqueName: \"kubernetes.io/projected/bb374cf7-1f64-4981-8500-45743b6c245d-kube-api-access-xwvvq\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.862709 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.862720 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.862733 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.862745 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.862757 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.896297 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-config-data" (OuterVolumeSpecName: "config-data") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.963732 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.131541 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"76f8ec744701d2466129fe4bf8df26122f8725276e4896b88abef624b66b4570"}
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.134983 4995 generic.go:334] "Generic (PLEG): container finished" podID="bb374cf7-1f64-4981-8500-45743b6c245d" containerID="9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885" exitCode=0
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.135221 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerDied","Data":"9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885"}
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.135316 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerDied","Data":"dbe0ac9e615dc8b84fc279cb1855295fa12e48224c261aad6672dc012a0042f7"}
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.135373 4995 scope.go:117] "RemoveContainer" containerID="b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.136340 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.162267 4995 scope.go:117] "RemoveContainer" containerID="f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.191221 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.205165 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213216 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213574 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137b9b9c-ff0c-461b-9731-8322ae411e99" containerName="watcher-decision-engine"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213593 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="137b9b9c-ff0c-461b-9731-8322ae411e99" containerName="watcher-decision-engine"
Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213604 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-api"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213611 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-api"
Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213624 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-central-agent"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213631 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-central-agent"
Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213644 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-kuttl-api-log"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213650 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-kuttl-api-log"
Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213659 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="proxy-httpd"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213665 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="proxy-httpd"
Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213676 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d491283-0ac3-4f24-88c1-6a380d594919" containerName="mariadb-account-delete"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213681 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d491283-0ac3-4f24-88c1-6a380d594919" containerName="mariadb-account-delete"
Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213694 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-notification-agent"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213700 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-notification-agent"
Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213708 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="sg-core"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213738 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="sg-core"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213932 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-notification-agent"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213948 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-api"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213961 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-kuttl-api-log"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213974 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d491283-0ac3-4f24-88c1-6a380d594919" containerName="mariadb-account-delete"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213985 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="proxy-httpd"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213992 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="137b9b9c-ff0c-461b-9731-8322ae411e99" containerName="watcher-decision-engine"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.214002 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="sg-core"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.214013 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-central-agent"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.215782 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.219643 4995 scope.go:117] "RemoveContainer" containerID="9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.219904 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.219959 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.220245 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.235757 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.263362 4995 scope.go:117] "RemoveContainer" containerID="7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270528 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-log-httpd\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270604 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-scripts\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270638 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270667 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270710 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-config-data\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270757 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64vhj\" (UniqueName: \"kubernetes.io/projected/4b175699-64e9-4d8e-a89b-6a80468dd954-kube-api-access-64vhj\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270787 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270835 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-run-httpd\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.288659 4995 scope.go:117] "RemoveContainer" containerID="b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a"
Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.289446 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a\": container with ID starting with b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a not found: ID does not exist" containerID="b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.289495 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a"} err="failed to get container status \"b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a\": rpc error: code = NotFound desc = could not find container \"b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a\": container with ID starting with b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a not found: ID does not exist"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.289528 4995 scope.go:117] "RemoveContainer" containerID="f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0"
Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.289866 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0\": container with ID starting with f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0 not found: ID does not exist" containerID="f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.289890 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0"} err="failed to get container status \"f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0\": rpc error: code = NotFound desc = could not find container \"f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0\": container with ID starting with f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0 not found: ID does not exist"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.289908 4995 scope.go:117] "RemoveContainer" containerID="9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885"
Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.290344 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885\": container with ID starting with 9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885 not found: ID does not exist" containerID="9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.290372 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885"} err="failed to get container status \"9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885\": rpc error: code = NotFound desc = could not find container \"9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885\": container with ID starting with 9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885 not found: ID does not exist"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.290391 4995 scope.go:117] "RemoveContainer" containerID="7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33"
Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.290670 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33\": container with ID starting with 7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33 not found: ID does not exist" containerID="7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.290698 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33"} err="failed to get container status \"7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33\": rpc error: code = NotFound desc = could not find container \"7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33\": container with ID starting with 7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33 not found: ID does not exist"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.371951 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-run-httpd\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.372020 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-log-httpd\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.372058 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-scripts\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.372079 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.372117 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.372137 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-config-data\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.372172 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64vhj\" (UniqueName: \"kubernetes.io/projected/4b175699-64e9-4d8e-a89b-6a80468dd954-kube-api-access-64vhj\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.372198 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.373574 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-log-httpd\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.373798 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-run-httpd\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.376644 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.377773 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.377857 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.378709 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-config-data\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.380974 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-scripts\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.392546 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64vhj\" (UniqueName: \"kubernetes.io/projected/4b175699-64e9-4d8e-a89b-6a80468dd954-kube-api-access-64vhj\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.464088 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hmlpp"]
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.470640 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hmlpp"]
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.498232 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"]
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.509287 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherea1c-account-delete-qmvms"]
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.527811 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="137b9b9c-ff0c-461b-9731-8322ae411e99" path="/var/lib/kubelet/pods/137b9b9c-ff0c-461b-9731-8322ae411e99/volumes"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.528416 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81de5920-673a-4656-812a-cd9418a924ad" path="/var/lib/kubelet/pods/81de5920-673a-4656-812a-cd9418a924ad/volumes"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.528912 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" path="/var/lib/kubelet/pods/bb374cf7-1f64-4981-8500-45743b6c245d/volumes"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.530549 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherea1c-account-delete-qmvms"]
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.530610 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"]
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.537814 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.996697 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:29:13 crc kubenswrapper[4995]: W0126 23:29:13.010567 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b175699_64e9_4d8e_a89b_6a80468dd954.slice/crio-59a0d6316551ef4185a6e6468dc1b7de864944c245e1817ff6a911a9105c2b8a WatchSource:0}: Error finding container 59a0d6316551ef4185a6e6468dc1b7de864944c245e1817ff6a911a9105c2b8a: Status 404 returned error can't find the container with id 59a0d6316551ef4185a6e6468dc1b7de864944c245e1817ff6a911a9105c2b8a
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.014383 4995 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.151692 4995 generic.go:334] "Generic (PLEG): container finished" podID="754307e8-af63-4e45-8bbe-b4daf4ba4e1e" containerID="b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432" exitCode=0
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.151783 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"754307e8-af63-4e45-8bbe-b4daf4ba4e1e","Type":"ContainerDied","Data":"b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432"}
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.153791 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerStarted","Data":"59a0d6316551ef4185a6e6468dc1b7de864944c245e1817ff6a911a9105c2b8a"}
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.173155 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.287318 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-combined-ca-bundle\") pod \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") "
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.287378 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb9q9\" (UniqueName: \"kubernetes.io/projected/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-kube-api-access-wb9q9\") pod \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") "
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.287502 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-config-data\") pod \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") "
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.287570 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-logs\") pod \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") "
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.288361 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-logs" (OuterVolumeSpecName: "logs") pod "754307e8-af63-4e45-8bbe-b4daf4ba4e1e" (UID: "754307e8-af63-4e45-8bbe-b4daf4ba4e1e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.289077 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-logs\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.295728 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-kube-api-access-wb9q9" (OuterVolumeSpecName: "kube-api-access-wb9q9") pod "754307e8-af63-4e45-8bbe-b4daf4ba4e1e" (UID: "754307e8-af63-4e45-8bbe-b4daf4ba4e1e"). InnerVolumeSpecName "kube-api-access-wb9q9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.313568 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "754307e8-af63-4e45-8bbe-b4daf4ba4e1e" (UID: "754307e8-af63-4e45-8bbe-b4daf4ba4e1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.347217 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-config-data" (OuterVolumeSpecName: "config-data") pod "754307e8-af63-4e45-8bbe-b4daf4ba4e1e" (UID: "754307e8-af63-4e45-8bbe-b4daf4ba4e1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.390684 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb9q9\" (UniqueName: \"kubernetes.io/projected/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-kube-api-access-wb9q9\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.390730 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.390745 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.161489 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerStarted","Data":"eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff"}
Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.163832 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"754307e8-af63-4e45-8bbe-b4daf4ba4e1e","Type":"ContainerDied","Data":"1192b13cd713adc467415e40fdefdf9c5c74e713846c370c81b0cb0acaaac6eb"}
Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.163865 4995 scope.go:117] "RemoveContainer" containerID="b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432"
Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.163967 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.208158 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.216624 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.526375 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d491283-0ac3-4f24-88c1-6a380d594919" path="/var/lib/kubelet/pods/1d491283-0ac3-4f24-88c1-6a380d594919/volumes"
Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.527163 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26594adb-ad3b-4555-a2a2-085ac874b80f" path="/var/lib/kubelet/pods/26594adb-ad3b-4555-a2a2-085ac874b80f/volumes"
Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.527666 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="754307e8-af63-4e45-8bbe-b4daf4ba4e1e" path="/var/lib/kubelet/pods/754307e8-af63-4e45-8bbe-b4daf4ba4e1e/volumes"
Jan 26 23:29:15 crc kubenswrapper[4995]: I0126 23:29:15.180092 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerStarted","Data":"f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32"}
Jan 26 23:29:16 crc kubenswrapper[4995]: I0126 23:29:16.197205 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerStarted","Data":"9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847"}
Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.207487 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerStarted","Data":"bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d"}
Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.207884 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.244145 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.416647781 podStartE2EDuration="5.244121981s" podCreationTimestamp="2026-01-26 23:29:12 +0000 UTC" firstStartedPulling="2026-01-26 23:29:13.014049364 +0000 UTC m=+1257.178756839" lastFinishedPulling="2026-01-26 23:29:16.841523534 +0000 UTC m=+1261.006231039" observedRunningTime="2026-01-26 23:29:17.241543246 +0000 UTC m=+1261.406250731" watchObservedRunningTime="2026-01-26 23:29:17.244121981 +0000 UTC m=+1261.408829446"
Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.660581 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-6ch9m"]
Jan 26 23:29:17 crc kubenswrapper[4995]: E0126 23:29:17.660993 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754307e8-af63-4e45-8bbe-b4daf4ba4e1e" containerName="watcher-applier"
Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.661013 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="754307e8-af63-4e45-8bbe-b4daf4ba4e1e" containerName="watcher-applier"
Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.661226 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="754307e8-af63-4e45-8bbe-b4daf4ba4e1e" containerName="watcher-applier"
Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.661902 4995 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.677256 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-6ch9m"] Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.765915 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9"] Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.767095 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.769378 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.783771 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9"] Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.797634 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-operator-scripts\") pod \"watcher-db-create-6ch9m\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.797718 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l4b2\" (UniqueName: \"kubernetes.io/projected/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-kube-api-access-6l4b2\") pod \"watcher-db-create-6ch9m\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.899052 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-operator-scripts\") pod \"watcher-db-create-6ch9m\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.899147 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cvgd\" (UniqueName: \"kubernetes.io/projected/db61ff94-84e4-46ff-affd-1d1fd691a219-kube-api-access-8cvgd\") pod \"watcher-17d4-account-create-update-dj9g9\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.899176 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l4b2\" (UniqueName: \"kubernetes.io/projected/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-kube-api-access-6l4b2\") pod \"watcher-db-create-6ch9m\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.899212 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db61ff94-84e4-46ff-affd-1d1fd691a219-operator-scripts\") pod \"watcher-17d4-account-create-update-dj9g9\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.900045 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-operator-scripts\") pod \"watcher-db-create-6ch9m\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 
23:29:17.918623 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l4b2\" (UniqueName: \"kubernetes.io/projected/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-kube-api-access-6l4b2\") pod \"watcher-db-create-6ch9m\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.976091 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:18 crc kubenswrapper[4995]: I0126 23:29:18.001430 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cvgd\" (UniqueName: \"kubernetes.io/projected/db61ff94-84e4-46ff-affd-1d1fd691a219-kube-api-access-8cvgd\") pod \"watcher-17d4-account-create-update-dj9g9\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:18 crc kubenswrapper[4995]: I0126 23:29:18.001511 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db61ff94-84e4-46ff-affd-1d1fd691a219-operator-scripts\") pod \"watcher-17d4-account-create-update-dj9g9\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:18 crc kubenswrapper[4995]: I0126 23:29:18.002551 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db61ff94-84e4-46ff-affd-1d1fd691a219-operator-scripts\") pod \"watcher-17d4-account-create-update-dj9g9\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:18 crc kubenswrapper[4995]: I0126 23:29:18.034328 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cvgd\" 
(UniqueName: \"kubernetes.io/projected/db61ff94-84e4-46ff-affd-1d1fd691a219-kube-api-access-8cvgd\") pod \"watcher-17d4-account-create-update-dj9g9\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:18 crc kubenswrapper[4995]: I0126 23:29:18.080715 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:18 crc kubenswrapper[4995]: I0126 23:29:18.685608 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-6ch9m"] Jan 26 23:29:18 crc kubenswrapper[4995]: W0126 23:29:18.690803 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0710d60_452a_4ffb_80e7_cf4b95c4b93c.slice/crio-68f2a7ce8aa818e717404ca12002ff3925a5d8d9603dabff3f0d6f462f935473 WatchSource:0}: Error finding container 68f2a7ce8aa818e717404ca12002ff3925a5d8d9603dabff3f0d6f462f935473: Status 404 returned error can't find the container with id 68f2a7ce8aa818e717404ca12002ff3925a5d8d9603dabff3f0d6f462f935473 Jan 26 23:29:18 crc kubenswrapper[4995]: W0126 23:29:18.769617 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb61ff94_84e4_46ff_affd_1d1fd691a219.slice/crio-28c67c5462134ea89367f6a2ea623c7af940d0f839f46931d6174c2e9e2d517f WatchSource:0}: Error finding container 28c67c5462134ea89367f6a2ea623c7af940d0f839f46931d6174c2e9e2d517f: Status 404 returned error can't find the container with id 28c67c5462134ea89367f6a2ea623c7af940d0f839f46931d6174c2e9e2d517f Jan 26 23:29:18 crc kubenswrapper[4995]: I0126 23:29:18.770574 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9"] Jan 26 23:29:19 crc kubenswrapper[4995]: I0126 23:29:19.244719 4995 
generic.go:334] "Generic (PLEG): container finished" podID="db61ff94-84e4-46ff-affd-1d1fd691a219" containerID="54026a5c7938c99685025eb0d6f422b9c6952be4668651d7bb950ada4b54c826" exitCode=0 Jan 26 23:29:19 crc kubenswrapper[4995]: I0126 23:29:19.244803 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" event={"ID":"db61ff94-84e4-46ff-affd-1d1fd691a219","Type":"ContainerDied","Data":"54026a5c7938c99685025eb0d6f422b9c6952be4668651d7bb950ada4b54c826"} Jan 26 23:29:19 crc kubenswrapper[4995]: I0126 23:29:19.244846 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" event={"ID":"db61ff94-84e4-46ff-affd-1d1fd691a219","Type":"ContainerStarted","Data":"28c67c5462134ea89367f6a2ea623c7af940d0f839f46931d6174c2e9e2d517f"} Jan 26 23:29:19 crc kubenswrapper[4995]: I0126 23:29:19.248651 4995 generic.go:334] "Generic (PLEG): container finished" podID="c0710d60-452a-4ffb-80e7-cf4b95c4b93c" containerID="8fd006c327ce56252705ed20528a00dcfa084ed04bd5e467803791a1f4ae0733" exitCode=0 Jan 26 23:29:19 crc kubenswrapper[4995]: I0126 23:29:19.248730 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-6ch9m" event={"ID":"c0710d60-452a-4ffb-80e7-cf4b95c4b93c","Type":"ContainerDied","Data":"8fd006c327ce56252705ed20528a00dcfa084ed04bd5e467803791a1f4ae0733"} Jan 26 23:29:19 crc kubenswrapper[4995]: I0126 23:29:19.248777 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-6ch9m" event={"ID":"c0710d60-452a-4ffb-80e7-cf4b95c4b93c","Type":"ContainerStarted","Data":"68f2a7ce8aa818e717404ca12002ff3925a5d8d9603dabff3f0d6f462f935473"} Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.735987 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.751598 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.854718 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db61ff94-84e4-46ff-affd-1d1fd691a219-operator-scripts\") pod \"db61ff94-84e4-46ff-affd-1d1fd691a219\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.854836 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l4b2\" (UniqueName: \"kubernetes.io/projected/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-kube-api-access-6l4b2\") pod \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.854872 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cvgd\" (UniqueName: \"kubernetes.io/projected/db61ff94-84e4-46ff-affd-1d1fd691a219-kube-api-access-8cvgd\") pod \"db61ff94-84e4-46ff-affd-1d1fd691a219\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.854918 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-operator-scripts\") pod \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.856301 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "c0710d60-452a-4ffb-80e7-cf4b95c4b93c" (UID: "c0710d60-452a-4ffb-80e7-cf4b95c4b93c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.856309 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db61ff94-84e4-46ff-affd-1d1fd691a219-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db61ff94-84e4-46ff-affd-1d1fd691a219" (UID: "db61ff94-84e4-46ff-affd-1d1fd691a219"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.863374 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db61ff94-84e4-46ff-affd-1d1fd691a219-kube-api-access-8cvgd" (OuterVolumeSpecName: "kube-api-access-8cvgd") pod "db61ff94-84e4-46ff-affd-1d1fd691a219" (UID: "db61ff94-84e4-46ff-affd-1d1fd691a219"). InnerVolumeSpecName "kube-api-access-8cvgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.865259 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-kube-api-access-6l4b2" (OuterVolumeSpecName: "kube-api-access-6l4b2") pod "c0710d60-452a-4ffb-80e7-cf4b95c4b93c" (UID: "c0710d60-452a-4ffb-80e7-cf4b95c4b93c"). InnerVolumeSpecName "kube-api-access-6l4b2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.957189 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.957244 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db61ff94-84e4-46ff-affd-1d1fd691a219-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.957267 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l4b2\" (UniqueName: \"kubernetes.io/projected/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-kube-api-access-6l4b2\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.957290 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cvgd\" (UniqueName: \"kubernetes.io/projected/db61ff94-84e4-46ff-affd-1d1fd691a219-kube-api-access-8cvgd\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:21 crc kubenswrapper[4995]: I0126 23:29:21.268029 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:21 crc kubenswrapper[4995]: I0126 23:29:21.268074 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-6ch9m" event={"ID":"c0710d60-452a-4ffb-80e7-cf4b95c4b93c","Type":"ContainerDied","Data":"68f2a7ce8aa818e717404ca12002ff3925a5d8d9603dabff3f0d6f462f935473"} Jan 26 23:29:21 crc kubenswrapper[4995]: I0126 23:29:21.268671 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68f2a7ce8aa818e717404ca12002ff3925a5d8d9603dabff3f0d6f462f935473" Jan 26 23:29:21 crc kubenswrapper[4995]: I0126 23:29:21.270903 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" event={"ID":"db61ff94-84e4-46ff-affd-1d1fd691a219","Type":"ContainerDied","Data":"28c67c5462134ea89367f6a2ea623c7af940d0f839f46931d6174c2e9e2d517f"} Jan 26 23:29:21 crc kubenswrapper[4995]: I0126 23:29:21.271312 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c67c5462134ea89367f6a2ea623c7af940d0f839f46931d6174c2e9e2d517f" Jan 26 23:29:21 crc kubenswrapper[4995]: I0126 23:29:21.270986 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.103094 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp"] Jan 26 23:29:23 crc kubenswrapper[4995]: E0126 23:29:23.103448 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0710d60-452a-4ffb-80e7-cf4b95c4b93c" containerName="mariadb-database-create" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.103461 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0710d60-452a-4ffb-80e7-cf4b95c4b93c" containerName="mariadb-database-create" Jan 26 23:29:23 crc kubenswrapper[4995]: E0126 23:29:23.103475 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db61ff94-84e4-46ff-affd-1d1fd691a219" containerName="mariadb-account-create-update" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.103481 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="db61ff94-84e4-46ff-affd-1d1fd691a219" containerName="mariadb-account-create-update" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.103630 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="db61ff94-84e4-46ff-affd-1d1fd691a219" containerName="mariadb-account-create-update" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.103648 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0710d60-452a-4ffb-80e7-cf4b95c4b93c" containerName="mariadb-database-create" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.104190 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.106556 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.106777 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6ndh2" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.118685 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp"] Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.209286 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.209335 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xq7q\" (UniqueName: \"kubernetes.io/projected/fe82d30b-18d6-486f-9494-034434237785-kube-api-access-2xq7q\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.209734 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-db-sync-config-data\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.210049 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-config-data\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.311721 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.311773 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xq7q\" (UniqueName: \"kubernetes.io/projected/fe82d30b-18d6-486f-9494-034434237785-kube-api-access-2xq7q\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.311834 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-db-sync-config-data\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.311890 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-config-data\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: 
I0126 23:29:23.317671 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-db-sync-config-data\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.317955 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.328024 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-config-data\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.330568 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xq7q\" (UniqueName: \"kubernetes.io/projected/fe82d30b-18d6-486f-9494-034434237785-kube-api-access-2xq7q\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.425758 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.916378 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp"] Jan 26 23:29:24 crc kubenswrapper[4995]: I0126 23:29:24.297424 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" event={"ID":"fe82d30b-18d6-486f-9494-034434237785","Type":"ContainerStarted","Data":"fe72b36fbe062455d8a290e6c1bd9e0b00b8cb2f1b8b0be2c5f79be8315462a9"} Jan 26 23:29:24 crc kubenswrapper[4995]: I0126 23:29:24.297788 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" event={"ID":"fe82d30b-18d6-486f-9494-034434237785","Type":"ContainerStarted","Data":"4955a65b2e034b25c1ec838fdd45111ea48a617f4dccc4e382e42571a24a1c90"} Jan 26 23:29:24 crc kubenswrapper[4995]: I0126 23:29:24.320168 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" podStartSLOduration=1.320142674 podStartE2EDuration="1.320142674s" podCreationTimestamp="2026-01-26 23:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:29:24.313898478 +0000 UTC m=+1268.478605963" watchObservedRunningTime="2026-01-26 23:29:24.320142674 +0000 UTC m=+1268.484850149" Jan 26 23:29:26 crc kubenswrapper[4995]: I0126 23:29:26.313414 4995 generic.go:334] "Generic (PLEG): container finished" podID="fe82d30b-18d6-486f-9494-034434237785" containerID="fe72b36fbe062455d8a290e6c1bd9e0b00b8cb2f1b8b0be2c5f79be8315462a9" exitCode=0 Jan 26 23:29:26 crc kubenswrapper[4995]: I0126 23:29:26.313498 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" 
event={"ID":"fe82d30b-18d6-486f-9494-034434237785","Type":"ContainerDied","Data":"fe72b36fbe062455d8a290e6c1bd9e0b00b8cb2f1b8b0be2c5f79be8315462a9"} Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.692950 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.889432 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-config-data\") pod \"fe82d30b-18d6-486f-9494-034434237785\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.890076 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-combined-ca-bundle\") pod \"fe82d30b-18d6-486f-9494-034434237785\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.890189 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xq7q\" (UniqueName: \"kubernetes.io/projected/fe82d30b-18d6-486f-9494-034434237785-kube-api-access-2xq7q\") pod \"fe82d30b-18d6-486f-9494-034434237785\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.890333 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-db-sync-config-data\") pod \"fe82d30b-18d6-486f-9494-034434237785\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.895345 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fe82d30b-18d6-486f-9494-034434237785" (UID: "fe82d30b-18d6-486f-9494-034434237785"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.899513 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe82d30b-18d6-486f-9494-034434237785-kube-api-access-2xq7q" (OuterVolumeSpecName: "kube-api-access-2xq7q") pod "fe82d30b-18d6-486f-9494-034434237785" (UID: "fe82d30b-18d6-486f-9494-034434237785"). InnerVolumeSpecName "kube-api-access-2xq7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.925526 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe82d30b-18d6-486f-9494-034434237785" (UID: "fe82d30b-18d6-486f-9494-034434237785"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.948489 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-config-data" (OuterVolumeSpecName: "config-data") pod "fe82d30b-18d6-486f-9494-034434237785" (UID: "fe82d30b-18d6-486f-9494-034434237785"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.992719 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.992768 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xq7q\" (UniqueName: \"kubernetes.io/projected/fe82d30b-18d6-486f-9494-034434237785-kube-api-access-2xq7q\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.992790 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.992807 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.332680 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" event={"ID":"fe82d30b-18d6-486f-9494-034434237785","Type":"ContainerDied","Data":"4955a65b2e034b25c1ec838fdd45111ea48a617f4dccc4e382e42571a24a1c90"} Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.332778 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4955a65b2e034b25c1ec838fdd45111ea48a617f4dccc4e382e42571a24a1c90" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.333197 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.714713 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:28 crc kubenswrapper[4995]: E0126 23:29:28.715083 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe82d30b-18d6-486f-9494-034434237785" containerName="watcher-kuttl-db-sync" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.715098 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe82d30b-18d6-486f-9494-034434237785" containerName="watcher-kuttl-db-sync" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.715295 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe82d30b-18d6-486f-9494-034434237785" containerName="watcher-kuttl-db-sync" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.716075 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.718229 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.718458 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.719079 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.719125 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6ndh2" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.722915 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:28 crc 
kubenswrapper[4995]: I0126 23:29:28.724028 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.726359 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.777615 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.777670 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.777722 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7295582-a245-4bd4-928f-8cbaa456efc7-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.777752 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 
23:29:28.777772 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7vdr\" (UniqueName: \"kubernetes.io/projected/d7295582-a245-4bd4-928f-8cbaa456efc7-kube-api-access-b7vdr\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.777806 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.777828 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.778181 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.792565 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.816152 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.817446 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.819679 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.832298 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.878902 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.878951 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.878993 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879015 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7295582-a245-4bd4-928f-8cbaa456efc7-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879031 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879047 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2pl6\" (UniqueName: \"kubernetes.io/projected/2bb37bcc-61c1-4154-8ee5-991a34693b5d-kube-api-access-w2pl6\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879061 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879082 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879143 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7vdr\" (UniqueName: \"kubernetes.io/projected/d7295582-a245-4bd4-928f-8cbaa456efc7-kube-api-access-b7vdr\") pod 
\"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879169 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0299c2-2a71-4542-bc23-10e088bfec0d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879188 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879219 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879235 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879251 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv668\" (UniqueName: \"kubernetes.io/projected/dc0299c2-2a71-4542-bc23-10e088bfec0d-kube-api-access-jv668\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879274 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879296 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bb37bcc-61c1-4154-8ee5-991a34693b5d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.882849 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7295582-a245-4bd4-928f-8cbaa456efc7-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.885710 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.886050 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: 
\"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.886458 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.886980 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.897412 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.901405 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7vdr\" (UniqueName: \"kubernetes.io/projected/d7295582-a245-4bd4-928f-8cbaa456efc7-kube-api-access-b7vdr\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980374 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980458 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980483 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2pl6\" (UniqueName: \"kubernetes.io/projected/2bb37bcc-61c1-4154-8ee5-991a34693b5d-kube-api-access-w2pl6\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980502 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980546 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0299c2-2a71-4542-bc23-10e088bfec0d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980580 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 
crc kubenswrapper[4995]: I0126 23:29:28.980604 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv668\" (UniqueName: \"kubernetes.io/projected/dc0299c2-2a71-4542-bc23-10e088bfec0d-kube-api-access-jv668\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980634 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980664 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bb37bcc-61c1-4154-8ee5-991a34693b5d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.981167 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bb37bcc-61c1-4154-8ee5-991a34693b5d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.982761 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0299c2-2a71-4542-bc23-10e088bfec0d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.986946 
4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.987146 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.987168 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.987408 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.997640 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.003226 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w2pl6\" (UniqueName: \"kubernetes.io/projected/2bb37bcc-61c1-4154-8ee5-991a34693b5d-kube-api-access-w2pl6\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.003965 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv668\" (UniqueName: \"kubernetes.io/projected/dc0299c2-2a71-4542-bc23-10e088bfec0d-kube-api-access-jv668\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.074051 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.074641 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.131381 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.544490 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:29 crc kubenswrapper[4995]: W0126 23:29:29.548456 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bb37bcc_61c1_4154_8ee5_991a34693b5d.slice/crio-078cf8901e23cca7210ddf1d2f934fd11bae0a827f8594fb785a1b8e7011bda9 WatchSource:0}: Error finding container 078cf8901e23cca7210ddf1d2f934fd11bae0a827f8594fb785a1b8e7011bda9: Status 404 returned error can't find the container with id 078cf8901e23cca7210ddf1d2f934fd11bae0a827f8594fb785a1b8e7011bda9 Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.553399 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:29 crc kubenswrapper[4995]: W0126 23:29:29.553900 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7295582_a245_4bd4_928f_8cbaa456efc7.slice/crio-190da8f21718427e7a4d0063c224f2651e39a06b4be88eca6ab321bbd9023276 WatchSource:0}: Error finding container 190da8f21718427e7a4d0063c224f2651e39a06b4be88eca6ab321bbd9023276: Status 404 returned error can't find the container with id 190da8f21718427e7a4d0063c224f2651e39a06b4be88eca6ab321bbd9023276 Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.724630 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.349169 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"dc0299c2-2a71-4542-bc23-10e088bfec0d","Type":"ContainerStarted","Data":"29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653"} Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.349568 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"dc0299c2-2a71-4542-bc23-10e088bfec0d","Type":"ContainerStarted","Data":"dbdc29f4e59ea5432ac8acd8aac8655730d2e92783170e75a0e2ef756183ec9c"} Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.351818 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d7295582-a245-4bd4-928f-8cbaa456efc7","Type":"ContainerStarted","Data":"063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6"} Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.351866 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d7295582-a245-4bd4-928f-8cbaa456efc7","Type":"ContainerStarted","Data":"f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420"} Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.351883 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d7295582-a245-4bd4-928f-8cbaa456efc7","Type":"ContainerStarted","Data":"190da8f21718427e7a4d0063c224f2651e39a06b4be88eca6ab321bbd9023276"} Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.351904 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.354174 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2bb37bcc-61c1-4154-8ee5-991a34693b5d","Type":"ContainerStarted","Data":"a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044"} Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 
23:29:30.354224 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2bb37bcc-61c1-4154-8ee5-991a34693b5d","Type":"ContainerStarted","Data":"078cf8901e23cca7210ddf1d2f934fd11bae0a827f8594fb785a1b8e7011bda9"}
Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.369656 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.369634811 podStartE2EDuration="2.369634811s" podCreationTimestamp="2026-01-26 23:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:29:30.363292833 +0000 UTC m=+1274.528000298" watchObservedRunningTime="2026-01-26 23:29:30.369634811 +0000 UTC m=+1274.534342276"
Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.386521 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.386501722 podStartE2EDuration="2.386501722s" podCreationTimestamp="2026-01-26 23:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:29:30.381251191 +0000 UTC m=+1274.545958656" watchObservedRunningTime="2026-01-26 23:29:30.386501722 +0000 UTC m=+1274.551209187"
Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.411159 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.411135616 podStartE2EDuration="2.411135616s" podCreationTimestamp="2026-01-26 23:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:29:30.403055945 +0000 UTC m=+1274.567763410" watchObservedRunningTime="2026-01-26 23:29:30.411135616 +0000 UTC m=+1274.575843091"
Jan 26 23:29:32 crc kubenswrapper[4995]: I0126 23:29:32.424143 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:34 crc kubenswrapper[4995]: I0126 23:29:34.075913 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:29:34 crc kubenswrapper[4995]: I0126 23:29:34.077390 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.075499 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.076164 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.093625 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.110262 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.132885 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.170710 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.437946 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.466390 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.476532 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.489384 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:29:41 crc kubenswrapper[4995]: I0126 23:29:41.558484 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:29:41 crc kubenswrapper[4995]: I0126 23:29:41.558854 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-central-agent" containerID="cri-o://eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff" gracePeriod=30
Jan 26 23:29:41 crc kubenswrapper[4995]: I0126 23:29:41.558969 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="sg-core" containerID="cri-o://9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847" gracePeriod=30
Jan 26 23:29:41 crc kubenswrapper[4995]: I0126 23:29:41.559001 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-notification-agent" containerID="cri-o://f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32" gracePeriod=30
Jan 26 23:29:41 crc kubenswrapper[4995]: I0126 23:29:41.559209 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="proxy-httpd" containerID="cri-o://bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d" gracePeriod=30
Jan 26 23:29:41 crc kubenswrapper[4995]: I0126 23:29:41.572176 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.137:3000/\": EOF"
Jan 26 23:29:42 crc kubenswrapper[4995]: I0126 23:29:42.461902 4995 generic.go:334] "Generic (PLEG): container finished" podID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerID="bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d" exitCode=0
Jan 26 23:29:42 crc kubenswrapper[4995]: I0126 23:29:42.462152 4995 generic.go:334] "Generic (PLEG): container finished" podID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerID="9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847" exitCode=2
Jan 26 23:29:42 crc kubenswrapper[4995]: I0126 23:29:42.462160 4995 generic.go:334] "Generic (PLEG): container finished" podID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerID="eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff" exitCode=0
Jan 26 23:29:42 crc kubenswrapper[4995]: I0126 23:29:42.461967 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerDied","Data":"bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d"}
Jan 26 23:29:42 crc kubenswrapper[4995]: I0126 23:29:42.462186 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerDied","Data":"9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847"}
Jan 26 23:29:42 crc kubenswrapper[4995]: I0126 23:29:42.462197 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerDied","Data":"eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff"}
Jan 26 23:29:42 crc kubenswrapper[4995]: I0126 23:29:42.538995 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.137:3000/\": dial tcp 10.217.0.137:3000: connect: connection refused"
Jan 26 23:29:43 crc kubenswrapper[4995]: I0126 23:29:43.697442 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 26 23:29:43 crc kubenswrapper[4995]: I0126 23:29:43.697910 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-kuttl-api-log" containerID="cri-o://f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420" gracePeriod=30
Jan 26 23:29:43 crc kubenswrapper[4995]: I0126 23:29:43.698020 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-api" containerID="cri-o://063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6" gracePeriod=30
Jan 26 23:29:44 crc kubenswrapper[4995]: I0126 23:29:44.504908 4995 generic.go:334] "Generic (PLEG): container finished" podID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerID="f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420" exitCode=143
Jan 26 23:29:44 crc kubenswrapper[4995]: I0126 23:29:44.504970 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d7295582-a245-4bd4-928f-8cbaa456efc7","Type":"ContainerDied","Data":"f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420"}
Jan 26 23:29:44 crc kubenswrapper[4995]: I0126 23:29:44.564682 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.141:9322/\": read tcp 10.217.0.2:53466->10.217.0.141:9322: read: connection reset by peer"
Jan 26 23:29:44 crc kubenswrapper[4995]: I0126 23:29:44.564765 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"https://10.217.0.141:9322/\": read tcp 10.217.0.2:53464->10.217.0.141:9322: read: connection reset by peer"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.010993 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092117 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-internal-tls-certs\") pod \"d7295582-a245-4bd4-928f-8cbaa456efc7\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") "
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092254 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-config-data\") pod \"d7295582-a245-4bd4-928f-8cbaa456efc7\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") "
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092286 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7295582-a245-4bd4-928f-8cbaa456efc7-logs\") pod \"d7295582-a245-4bd4-928f-8cbaa456efc7\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") "
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092479 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-custom-prometheus-ca\") pod \"d7295582-a245-4bd4-928f-8cbaa456efc7\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") "
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092522 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-combined-ca-bundle\") pod \"d7295582-a245-4bd4-928f-8cbaa456efc7\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") "
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092584 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-public-tls-certs\") pod \"d7295582-a245-4bd4-928f-8cbaa456efc7\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") "
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092637 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7vdr\" (UniqueName: \"kubernetes.io/projected/d7295582-a245-4bd4-928f-8cbaa456efc7-kube-api-access-b7vdr\") pod \"d7295582-a245-4bd4-928f-8cbaa456efc7\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") "
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092991 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7295582-a245-4bd4-928f-8cbaa456efc7-logs" (OuterVolumeSpecName: "logs") pod "d7295582-a245-4bd4-928f-8cbaa456efc7" (UID: "d7295582-a245-4bd4-928f-8cbaa456efc7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.093272 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7295582-a245-4bd4-928f-8cbaa456efc7-logs\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.103996 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7295582-a245-4bd4-928f-8cbaa456efc7-kube-api-access-b7vdr" (OuterVolumeSpecName: "kube-api-access-b7vdr") pod "d7295582-a245-4bd4-928f-8cbaa456efc7" (UID: "d7295582-a245-4bd4-928f-8cbaa456efc7"). InnerVolumeSpecName "kube-api-access-b7vdr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.137943 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "d7295582-a245-4bd4-928f-8cbaa456efc7" (UID: "d7295582-a245-4bd4-928f-8cbaa456efc7"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.186748 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d7295582-a245-4bd4-928f-8cbaa456efc7" (UID: "d7295582-a245-4bd4-928f-8cbaa456efc7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.194831 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.194868 4995 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.194880 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7vdr\" (UniqueName: \"kubernetes.io/projected/d7295582-a245-4bd4-928f-8cbaa456efc7-kube-api-access-b7vdr\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.203402 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-config-data" (OuterVolumeSpecName: "config-data") pod "d7295582-a245-4bd4-928f-8cbaa456efc7" (UID: "d7295582-a245-4bd4-928f-8cbaa456efc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.209669 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7295582-a245-4bd4-928f-8cbaa456efc7" (UID: "d7295582-a245-4bd4-928f-8cbaa456efc7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.210159 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d7295582-a245-4bd4-928f-8cbaa456efc7" (UID: "d7295582-a245-4bd4-928f-8cbaa456efc7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.296597 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.296634 4995 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.296674 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.580250 4995 generic.go:334] "Generic (PLEG): container finished" podID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerID="063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6" exitCode=0
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.580306 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d7295582-a245-4bd4-928f-8cbaa456efc7","Type":"ContainerDied","Data":"063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6"}
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.580329 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.580346 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d7295582-a245-4bd4-928f-8cbaa456efc7","Type":"ContainerDied","Data":"190da8f21718427e7a4d0063c224f2651e39a06b4be88eca6ab321bbd9023276"}
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.580366 4995 scope.go:117] "RemoveContainer" containerID="063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.638878 4995 scope.go:117] "RemoveContainer" containerID="f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.670219 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.678616 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.703926 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 26 23:29:45 crc kubenswrapper[4995]: E0126 23:29:45.704683 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-kuttl-api-log"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.704709 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-kuttl-api-log"
Jan 26 23:29:45 crc kubenswrapper[4995]: E0126 23:29:45.704735 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-api"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.704745 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-api"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.705017 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-kuttl-api-log"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.705047 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-api"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.705278 4995 scope.go:117] "RemoveContainer" containerID="063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.706366 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: E0126 23:29:45.707165 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6\": container with ID starting with 063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6 not found: ID does not exist" containerID="063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.707216 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6"} err="failed to get container status \"063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6\": rpc error: code = NotFound desc = could not find container \"063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6\": container with ID starting with 063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6 not found: ID does not exist"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.707251 4995 scope.go:117] "RemoveContainer" containerID="f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.711831 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.712076 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.712217 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.714157 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.714234 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.714263 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.714306 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.714340 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08c10a90-cf36-46d8-9d0a-8152c08eccf9-logs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.718396 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbx25\" (UniqueName: \"kubernetes.io/projected/08c10a90-cf36-46d8-9d0a-8152c08eccf9-kube-api-access-jbx25\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.718475 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: E0126 23:29:45.743469 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420\": container with ID starting with f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420 not found: ID does not exist" containerID="f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.743530 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420"} err="failed to get container status \"f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420\": rpc error: code = NotFound desc = could not find container \"f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420\": container with ID starting with f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420 not found: ID does not exist"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.773974 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.827490 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.827536 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.827560 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.827583 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.827604 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08c10a90-cf36-46d8-9d0a-8152c08eccf9-logs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.827629 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbx25\" (UniqueName: \"kubernetes.io/projected/08c10a90-cf36-46d8-9d0a-8152c08eccf9-kube-api-access-jbx25\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.827649 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.828728 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08c10a90-cf36-46d8-9d0a-8152c08eccf9-logs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.838258 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.838293 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.838820 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.838873 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.848770 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.880830 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbx25\" (UniqueName: \"kubernetes.io/projected/08c10a90-cf36-46d8-9d0a-8152c08eccf9-kube-api-access-jbx25\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.920843 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.069562 4995 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7295582_a245_4bd4_928f_8cbaa456efc7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b175699_64e9_4d8e_a89b_6a80468dd954.slice/crio-conmon-f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.484857 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.550928 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" path="/var/lib/kubelet/pods/d7295582-a245-4bd4-928f-8cbaa456efc7/volumes"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.566057 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.609933 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"08c10a90-cf36-46d8-9d0a-8152c08eccf9","Type":"ContainerStarted","Data":"b1ed431fa560523554c77fc1ace70d32844eb1774bdcc487dd885dc3a028b347"}
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.613436 4995 generic.go:334] "Generic (PLEG): container finished" podID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerID="f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32" exitCode=0
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.613477 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerDied","Data":"f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32"}
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.613500 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerDied","Data":"59a0d6316551ef4185a6e6468dc1b7de864944c245e1817ff6a911a9105c2b8a"}
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.613524 4995 scope.go:117] "RemoveContainer" containerID="bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.613642 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.632868 4995 scope.go:117] "RemoveContainer" containerID="9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.648590 4995 scope.go:117] "RemoveContainer" containerID="f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.667768 4995 scope.go:117] "RemoveContainer" containerID="eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.701147 4995 scope.go:117] "RemoveContainer" containerID="bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d"
Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.701986 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d\": container with ID starting with bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d not found: ID does not exist" containerID="bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.702059 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d"} err="failed to get container status \"bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d\": rpc error: code = NotFound desc = could not find container \"bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d\": container with ID starting with bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d not found: ID does not exist"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.702119 4995 scope.go:117] "RemoveContainer" containerID="9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847"
Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.702740 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847\": container with ID starting with 9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847 not found: ID does not exist" containerID="9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.702805 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847"} err="failed to get container status \"9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847\": rpc error: code = NotFound desc = could not find container \"9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847\": container with ID starting with 9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847 not found: ID does not exist"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.702837 4995 scope.go:117] "RemoveContainer" containerID="f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32"
Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.703121 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32\": container with ID starting with f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32 not found: ID does not exist" containerID="f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.703148 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32"} err="failed to get container status \"f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32\": rpc error: code = NotFound desc = could not find container \"f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32\": container with ID starting with f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32 not found: ID does not exist"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.703160 4995 scope.go:117] "RemoveContainer" containerID="eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff"
Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.703374 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff\": container with ID starting with eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff not found: ID does not exist" containerID="eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.703398 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff"} err="failed to get container status \"eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff\": rpc error: code = NotFound desc = could not find container \"eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff\": container with ID starting with eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff not found: ID does not exist"
Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.750430 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-scripts\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: 
\"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.751257 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.753647 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-log-httpd\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.753869 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-ceilometer-tls-certs\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.753934 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-sg-core-conf-yaml\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.753983 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-config-data\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.754021 4995 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-64vhj\" (UniqueName: \"kubernetes.io/projected/4b175699-64e9-4d8e-a89b-6a80468dd954-kube-api-access-64vhj\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.754073 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-run-httpd\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.754123 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-combined-ca-bundle\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.754268 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-scripts" (OuterVolumeSpecName: "scripts") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.754827 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.754982 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.755004 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.755017 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.757221 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b175699-64e9-4d8e-a89b-6a80468dd954-kube-api-access-64vhj" (OuterVolumeSpecName: "kube-api-access-64vhj") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "kube-api-access-64vhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.781277 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.802039 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.825737 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.856667 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.856713 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.856728 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64vhj\" (UniqueName: \"kubernetes.io/projected/4b175699-64e9-4d8e-a89b-6a80468dd954-kube-api-access-64vhj\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.856739 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.872129 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-config-data" (OuterVolumeSpecName: "config-data") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.960587 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.968091 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.984287 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.993403 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.993847 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-notification-agent" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.993869 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-notification-agent" Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.993889 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-central-agent" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.993897 4995 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-central-agent" Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.993910 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="proxy-httpd" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.993921 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="proxy-httpd" Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.993943 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="sg-core" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.993950 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="sg-core" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.994180 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="proxy-httpd" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.994204 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-notification-agent" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.994215 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-central-agent" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.994225 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="sg-core" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.996014 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.998492 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.000699 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.001029 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.017916 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.062213 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.062398 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-log-httpd\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.062536 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-run-httpd\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.062639 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.062756 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-config-data\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.062941 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-scripts\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.063045 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.063275 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkf54\" (UniqueName: \"kubernetes.io/projected/5779d6d0-6f61-467c-b521-a16e0201f7ed-kube-api-access-pkf54\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164613 4995 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-log-httpd\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164675 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-run-httpd\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164702 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164750 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-config-data\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164840 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-scripts\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164862 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164901 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkf54\" (UniqueName: \"kubernetes.io/projected/5779d6d0-6f61-467c-b521-a16e0201f7ed-kube-api-access-pkf54\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164937 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.165118 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-log-httpd\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.165287 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-run-httpd\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.169160 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-scripts\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.171409 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-config-data\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.171977 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.174636 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.188448 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkf54\" (UniqueName: \"kubernetes.io/projected/5779d6d0-6f61-467c-b521-a16e0201f7ed-kube-api-access-pkf54\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.188727 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.312219 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.440087 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.462750 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.505933 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher17d4-account-delete-8vn6w"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.511705 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.538528 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher17d4-account-delete-8vn6w"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.562490 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.563952 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="dc0299c2-2a71-4542-bc23-10e088bfec0d" containerName="watcher-decision-engine" containerID="cri-o://29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653" gracePeriod=30 Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.573145 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2grq8\" (UniqueName: \"kubernetes.io/projected/969a304d-b02f-40b9-b439-9f3f5b88ccfa-kube-api-access-2grq8\") pod \"watcher17d4-account-delete-8vn6w\" (UID: \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc 
kubenswrapper[4995]: I0126 23:29:47.573315 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/969a304d-b02f-40b9-b439-9f3f5b88ccfa-operator-scripts\") pod \"watcher17d4-account-delete-8vn6w\" (UID: \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.625808 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.645219 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"08c10a90-cf36-46d8-9d0a-8152c08eccf9","Type":"ContainerStarted","Data":"46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6"} Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.645268 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"08c10a90-cf36-46d8-9d0a-8152c08eccf9","Type":"ContainerStarted","Data":"1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7"} Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.645666 4995 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="watcher-kuttl-default/watcher-kuttl-api-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-6ndh2\" not found" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.646494 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.663905 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.664324 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="2bb37bcc-61c1-4154-8ee5-991a34693b5d" containerName="watcher-applier" containerID="cri-o://a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044" gracePeriod=30 Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.673850 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/969a304d-b02f-40b9-b439-9f3f5b88ccfa-operator-scripts\") pod \"watcher17d4-account-delete-8vn6w\" (UID: \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.674689 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/969a304d-b02f-40b9-b439-9f3f5b88ccfa-operator-scripts\") pod \"watcher17d4-account-delete-8vn6w\" (UID: \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.675257 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2grq8\" (UniqueName: \"kubernetes.io/projected/969a304d-b02f-40b9-b439-9f3f5b88ccfa-kube-api-access-2grq8\") pod \"watcher17d4-account-delete-8vn6w\" (UID: 
\"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc kubenswrapper[4995]: E0126 23:29:47.675375 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:47 crc kubenswrapper[4995]: E0126 23:29:47.675419 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data podName:08c10a90-cf36-46d8-9d0a-8152c08eccf9 nodeName:}" failed. No retries permitted until 2026-01-26 23:29:48.175402557 +0000 UTC m=+1292.340110022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data") pod "watcher-kuttl-api-0" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9") : secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.686205 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.686179216 podStartE2EDuration="2.686179216s" podCreationTimestamp="2026-01-26 23:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:29:47.683813867 +0000 UTC m=+1291.848521322" watchObservedRunningTime="2026-01-26 23:29:47.686179216 +0000 UTC m=+1291.850886681" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.705179 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2grq8\" (UniqueName: \"kubernetes.io/projected/969a304d-b02f-40b9-b439-9f3f5b88ccfa-kube-api-access-2grq8\") pod \"watcher17d4-account-delete-8vn6w\" (UID: \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 
23:29:47.860554 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.904415 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:47 crc kubenswrapper[4995]: W0126 23:29:47.907898 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5779d6d0_6f61_467c_b521_a16e0201f7ed.slice/crio-15dcd2f1dacb6b1874128ac2c9ca47a1269e26431a781aee91bfcaa8e21c1b83 WatchSource:0}: Error finding container 15dcd2f1dacb6b1874128ac2c9ca47a1269e26431a781aee91bfcaa8e21c1b83: Status 404 returned error can't find the container with id 15dcd2f1dacb6b1874128ac2c9ca47a1269e26431a781aee91bfcaa8e21c1b83 Jan 26 23:29:48 crc kubenswrapper[4995]: E0126 23:29:48.183076 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:48 crc kubenswrapper[4995]: E0126 23:29:48.183518 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data podName:08c10a90-cf36-46d8-9d0a-8152c08eccf9 nodeName:}" failed. No retries permitted until 2026-01-26 23:29:49.183501816 +0000 UTC m=+1293.348209281 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data") pod "watcher-kuttl-api-0" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9") : secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.308353 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher17d4-account-delete-8vn6w"] Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.526752 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" path="/var/lib/kubelet/pods/4b175699-64e9-4d8e-a89b-6a80468dd954/volumes" Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.528043 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe82d30b-18d6-486f-9494-034434237785" path="/var/lib/kubelet/pods/fe82d30b-18d6-486f-9494-034434237785/volumes" Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.732926 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerStarted","Data":"13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858"} Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.732981 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerStarted","Data":"15dcd2f1dacb6b1874128ac2c9ca47a1269e26431a781aee91bfcaa8e21c1b83"} Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.739001 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" event={"ID":"969a304d-b02f-40b9-b439-9f3f5b88ccfa","Type":"ContainerStarted","Data":"628857604cce928f818ebc089bc87e2ce8ba9c786cadc542c50f09fdce7e0220"} Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.739034 4995 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-api" containerID="cri-o://46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6" gracePeriod=30 Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.739090 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" event={"ID":"969a304d-b02f-40b9-b439-9f3f5b88ccfa","Type":"ContainerStarted","Data":"9b010aec3dd4bdbe6aad29d1ac3dd99a9876c86815c99c09402282e05b400799"} Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.742156 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-kuttl-api-log" containerID="cri-o://1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7" gracePeriod=30 Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.746462 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.144:9322/\": EOF" Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.768679 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" podStartSLOduration=1.768659638 podStartE2EDuration="1.768659638s" podCreationTimestamp="2026-01-26 23:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:29:48.763581282 +0000 UTC m=+1292.928288747" watchObservedRunningTime="2026-01-26 23:29:48.768659638 +0000 UTC m=+1292.933367103" Jan 26 23:29:49 crc kubenswrapper[4995]: E0126 23:29:49.080174 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:29:49 crc kubenswrapper[4995]: E0126 23:29:49.090257 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:29:49 crc kubenswrapper[4995]: E0126 23:29:49.093177 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:29:49 crc kubenswrapper[4995]: E0126 23:29:49.093215 4995 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="2bb37bcc-61c1-4154-8ee5-991a34693b5d" containerName="watcher-applier" Jan 26 23:29:49 crc kubenswrapper[4995]: E0126 23:29:49.216487 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:49 crc kubenswrapper[4995]: E0126 23:29:49.216561 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data podName:08c10a90-cf36-46d8-9d0a-8152c08eccf9 nodeName:}" failed. 
No retries permitted until 2026-01-26 23:29:51.216545975 +0000 UTC m=+1295.381253440 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data") pod "watcher-kuttl-api-0" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9") : secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:49 crc kubenswrapper[4995]: I0126 23:29:49.746308 4995 generic.go:334] "Generic (PLEG): container finished" podID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerID="1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7" exitCode=143 Jan 26 23:29:49 crc kubenswrapper[4995]: I0126 23:29:49.746381 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"08c10a90-cf36-46d8-9d0a-8152c08eccf9","Type":"ContainerDied","Data":"1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7"} Jan 26 23:29:49 crc kubenswrapper[4995]: I0126 23:29:49.747793 4995 generic.go:334] "Generic (PLEG): container finished" podID="969a304d-b02f-40b9-b439-9f3f5b88ccfa" containerID="628857604cce928f818ebc089bc87e2ce8ba9c786cadc542c50f09fdce7e0220" exitCode=0 Jan 26 23:29:49 crc kubenswrapper[4995]: I0126 23:29:49.747848 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" event={"ID":"969a304d-b02f-40b9-b439-9f3f5b88ccfa","Type":"ContainerDied","Data":"628857604cce928f818ebc089bc87e2ce8ba9c786cadc542c50f09fdce7e0220"} Jan 26 23:29:49 crc kubenswrapper[4995]: I0126 23:29:49.749551 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerStarted","Data":"4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8"} Jan 26 23:29:50 crc kubenswrapper[4995]: I0126 23:29:50.068623 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:50 crc kubenswrapper[4995]: I0126 23:29:50.758238 4995 generic.go:334] "Generic (PLEG): container finished" podID="2bb37bcc-61c1-4154-8ee5-991a34693b5d" containerID="a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044" exitCode=0 Jan 26 23:29:50 crc kubenswrapper[4995]: I0126 23:29:50.758328 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2bb37bcc-61c1-4154-8ee5-991a34693b5d","Type":"ContainerDied","Data":"a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044"} Jan 26 23:29:50 crc kubenswrapper[4995]: I0126 23:29:50.760564 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerStarted","Data":"21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e"} Jan 26 23:29:50 crc kubenswrapper[4995]: I0126 23:29:50.921625 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.231360 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.240870 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:51 crc kubenswrapper[4995]: E0126 23:29:51.271645 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:51 crc kubenswrapper[4995]: E0126 23:29:51.271718 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data podName:08c10a90-cf36-46d8-9d0a-8152c08eccf9 nodeName:}" failed. 
No retries permitted until 2026-01-26 23:29:55.271701848 +0000 UTC m=+1299.436409313 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data") pod "watcher-kuttl-api-0" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9") : secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.372501 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/969a304d-b02f-40b9-b439-9f3f5b88ccfa-operator-scripts\") pod \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\" (UID: \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.372624 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-config-data\") pod \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.372647 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bb37bcc-61c1-4154-8ee5-991a34693b5d-logs\") pod \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.372728 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2pl6\" (UniqueName: \"kubernetes.io/projected/2bb37bcc-61c1-4154-8ee5-991a34693b5d-kube-api-access-w2pl6\") pod \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.372757 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-combined-ca-bundle\") pod \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.372809 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2grq8\" (UniqueName: \"kubernetes.io/projected/969a304d-b02f-40b9-b439-9f3f5b88ccfa-kube-api-access-2grq8\") pod \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\" (UID: \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.373600 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bb37bcc-61c1-4154-8ee5-991a34693b5d-logs" (OuterVolumeSpecName: "logs") pod "2bb37bcc-61c1-4154-8ee5-991a34693b5d" (UID: "2bb37bcc-61c1-4154-8ee5-991a34693b5d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.373981 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/969a304d-b02f-40b9-b439-9f3f5b88ccfa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "969a304d-b02f-40b9-b439-9f3f5b88ccfa" (UID: "969a304d-b02f-40b9-b439-9f3f5b88ccfa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.377291 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bb37bcc-61c1-4154-8ee5-991a34693b5d-kube-api-access-w2pl6" (OuterVolumeSpecName: "kube-api-access-w2pl6") pod "2bb37bcc-61c1-4154-8ee5-991a34693b5d" (UID: "2bb37bcc-61c1-4154-8ee5-991a34693b5d"). InnerVolumeSpecName "kube-api-access-w2pl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.377794 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/969a304d-b02f-40b9-b439-9f3f5b88ccfa-kube-api-access-2grq8" (OuterVolumeSpecName: "kube-api-access-2grq8") pod "969a304d-b02f-40b9-b439-9f3f5b88ccfa" (UID: "969a304d-b02f-40b9-b439-9f3f5b88ccfa"). InnerVolumeSpecName "kube-api-access-2grq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.404283 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bb37bcc-61c1-4154-8ee5-991a34693b5d" (UID: "2bb37bcc-61c1-4154-8ee5-991a34693b5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.428312 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-config-data" (OuterVolumeSpecName: "config-data") pod "2bb37bcc-61c1-4154-8ee5-991a34693b5d" (UID: "2bb37bcc-61c1-4154-8ee5-991a34693b5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.475616 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.475658 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bb37bcc-61c1-4154-8ee5-991a34693b5d-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.475674 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2pl6\" (UniqueName: \"kubernetes.io/projected/2bb37bcc-61c1-4154-8ee5-991a34693b5d-kube-api-access-w2pl6\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.475687 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.475701 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2grq8\" (UniqueName: \"kubernetes.io/projected/969a304d-b02f-40b9-b439-9f3f5b88ccfa-kube-api-access-2grq8\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.475713 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/969a304d-b02f-40b9-b439-9f3f5b88ccfa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.768550 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" event={"ID":"969a304d-b02f-40b9-b439-9f3f5b88ccfa","Type":"ContainerDied","Data":"9b010aec3dd4bdbe6aad29d1ac3dd99a9876c86815c99c09402282e05b400799"} 
Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.768587 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b010aec3dd4bdbe6aad29d1ac3dd99a9876c86815c99c09402282e05b400799" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.768641 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.773188 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2bb37bcc-61c1-4154-8ee5-991a34693b5d","Type":"ContainerDied","Data":"078cf8901e23cca7210ddf1d2f934fd11bae0a827f8594fb785a1b8e7011bda9"} Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.773232 4995 scope.go:117] "RemoveContainer" containerID="a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.773194 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.776517 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerStarted","Data":"1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9"} Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.776670 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-central-agent" containerID="cri-o://13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858" gracePeriod=30 Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.776886 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.776930 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="proxy-httpd" containerID="cri-o://1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9" gracePeriod=30 Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.776967 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="sg-core" containerID="cri-o://21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e" gracePeriod=30 Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.776996 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-notification-agent" containerID="cri-o://4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8" 
gracePeriod=30 Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.813610 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.80277482 podStartE2EDuration="5.813592041s" podCreationTimestamp="2026-01-26 23:29:46 +0000 UTC" firstStartedPulling="2026-01-26 23:29:47.91074641 +0000 UTC m=+1292.075453875" lastFinishedPulling="2026-01-26 23:29:50.921563631 +0000 UTC m=+1295.086271096" observedRunningTime="2026-01-26 23:29:51.811038667 +0000 UTC m=+1295.975746132" watchObservedRunningTime="2026-01-26 23:29:51.813592041 +0000 UTC m=+1295.978299506" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.856575 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.863957 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.949363 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.144:9322/\": read tcp 10.217.0.2:48712->10.217.0.144:9322: read: connection reset by peer" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.949887 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.144:9322/\": dial tcp 10.217.0.144:9322: connect: connection refused" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.154460 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.291929 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0299c2-2a71-4542-bc23-10e088bfec0d-logs\") pod \"dc0299c2-2a71-4542-bc23-10e088bfec0d\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.292001 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv668\" (UniqueName: \"kubernetes.io/projected/dc0299c2-2a71-4542-bc23-10e088bfec0d-kube-api-access-jv668\") pod \"dc0299c2-2a71-4542-bc23-10e088bfec0d\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.292054 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-combined-ca-bundle\") pod \"dc0299c2-2a71-4542-bc23-10e088bfec0d\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.292147 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-custom-prometheus-ca\") pod \"dc0299c2-2a71-4542-bc23-10e088bfec0d\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.292243 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-config-data\") pod \"dc0299c2-2a71-4542-bc23-10e088bfec0d\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.296498 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dc0299c2-2a71-4542-bc23-10e088bfec0d-logs" (OuterVolumeSpecName: "logs") pod "dc0299c2-2a71-4542-bc23-10e088bfec0d" (UID: "dc0299c2-2a71-4542-bc23-10e088bfec0d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.308373 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc0299c2-2a71-4542-bc23-10e088bfec0d-kube-api-access-jv668" (OuterVolumeSpecName: "kube-api-access-jv668") pod "dc0299c2-2a71-4542-bc23-10e088bfec0d" (UID: "dc0299c2-2a71-4542-bc23-10e088bfec0d"). InnerVolumeSpecName "kube-api-access-jv668". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.381322 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc0299c2-2a71-4542-bc23-10e088bfec0d" (UID: "dc0299c2-2a71-4542-bc23-10e088bfec0d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.393550 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.393583 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0299c2-2a71-4542-bc23-10e088bfec0d-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.393594 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv668\" (UniqueName: \"kubernetes.io/projected/dc0299c2-2a71-4542-bc23-10e088bfec0d-kube-api-access-jv668\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.398882 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "dc0299c2-2a71-4542-bc23-10e088bfec0d" (UID: "dc0299c2-2a71-4542-bc23-10e088bfec0d"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.462254 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-config-data" (OuterVolumeSpecName: "config-data") pod "dc0299c2-2a71-4542-bc23-10e088bfec0d" (UID: "dc0299c2-2a71-4542-bc23-10e088bfec0d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.495478 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.495829 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.507902 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.527860 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bb37bcc-61c1-4154-8ee5-991a34693b5d" path="/var/lib/kubelet/pods/2bb37bcc-61c1-4154-8ee5-991a34693b5d/volumes" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.605821 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-6ch9m"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.621833 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-6ch9m"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.647173 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.655710 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher17d4-account-delete-8vn6w"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.666839 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.673209 4995 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher17d4-account-delete-8vn6w"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.702409 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbx25\" (UniqueName: \"kubernetes.io/projected/08c10a90-cf36-46d8-9d0a-8152c08eccf9-kube-api-access-jbx25\") pod \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.702455 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-public-tls-certs\") pod \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.702596 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data\") pod \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.702618 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08c10a90-cf36-46d8-9d0a-8152c08eccf9-logs\") pod \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.702642 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-internal-tls-certs\") pod \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.702713 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-custom-prometheus-ca\") pod \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.702733 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-combined-ca-bundle\") pod \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.705624 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08c10a90-cf36-46d8-9d0a-8152c08eccf9-logs" (OuterVolumeSpecName: "logs") pod "08c10a90-cf36-46d8-9d0a-8152c08eccf9" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.725949 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c10a90-cf36-46d8-9d0a-8152c08eccf9-kube-api-access-jbx25" (OuterVolumeSpecName: "kube-api-access-jbx25") pod "08c10a90-cf36-46d8-9d0a-8152c08eccf9" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9"). InnerVolumeSpecName "kube-api-access-jbx25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.726086 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08c10a90-cf36-46d8-9d0a-8152c08eccf9" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.727468 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "08c10a90-cf36-46d8-9d0a-8152c08eccf9" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.743832 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data" (OuterVolumeSpecName: "config-data") pod "08c10a90-cf36-46d8-9d0a-8152c08eccf9" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.744921 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "08c10a90-cf36-46d8-9d0a-8152c08eccf9" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.763656 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "08c10a90-cf36-46d8-9d0a-8152c08eccf9" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.792839 4995 generic.go:334] "Generic (PLEG): container finished" podID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerID="1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9" exitCode=0 Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.792875 4995 generic.go:334] "Generic (PLEG): container finished" podID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerID="21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e" exitCode=2 Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.792883 4995 generic.go:334] "Generic (PLEG): container finished" podID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerID="4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8" exitCode=0 Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.792921 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerDied","Data":"1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9"} Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.792953 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerDied","Data":"21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e"} Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.792963 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerDied","Data":"4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8"} Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.794571 4995 generic.go:334] "Generic (PLEG): container finished" podID="dc0299c2-2a71-4542-bc23-10e088bfec0d" containerID="29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653" 
exitCode=0 Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.794646 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.794674 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"dc0299c2-2a71-4542-bc23-10e088bfec0d","Type":"ContainerDied","Data":"29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653"} Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.794730 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"dc0299c2-2a71-4542-bc23-10e088bfec0d","Type":"ContainerDied","Data":"dbdc29f4e59ea5432ac8acd8aac8655730d2e92783170e75a0e2ef756183ec9c"} Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.794746 4995 scope.go:117] "RemoveContainer" containerID="29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.796315 4995 generic.go:334] "Generic (PLEG): container finished" podID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerID="46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6" exitCode=0 Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.796349 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"08c10a90-cf36-46d8-9d0a-8152c08eccf9","Type":"ContainerDied","Data":"46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6"} Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.796405 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"08c10a90-cf36-46d8-9d0a-8152c08eccf9","Type":"ContainerDied","Data":"b1ed431fa560523554c77fc1ace70d32844eb1774bdcc487dd885dc3a028b347"} Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.796539 
4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.805598 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.806422 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.806446 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbx25\" (UniqueName: \"kubernetes.io/projected/08c10a90-cf36-46d8-9d0a-8152c08eccf9-kube-api-access-jbx25\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.806459 4995 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.806471 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.806483 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08c10a90-cf36-46d8-9d0a-8152c08eccf9-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.806495 4995 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-internal-tls-certs\") on node \"crc\" 
DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.826352 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.829607 4995 scope.go:117] "RemoveContainer" containerID="29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653" Jan 26 23:29:52 crc kubenswrapper[4995]: E0126 23:29:52.830030 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653\": container with ID starting with 29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653 not found: ID does not exist" containerID="29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.830068 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653"} err="failed to get container status \"29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653\": rpc error: code = NotFound desc = could not find container \"29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653\": container with ID starting with 29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653 not found: ID does not exist" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.830093 4995 scope.go:117] "RemoveContainer" containerID="46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.832606 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.847391 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:52 crc kubenswrapper[4995]: 
I0126 23:29:52.853111 4995 scope.go:117] "RemoveContainer" containerID="1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.854022 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.872054 4995 scope.go:117] "RemoveContainer" containerID="46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6" Jan 26 23:29:52 crc kubenswrapper[4995]: E0126 23:29:52.872519 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6\": container with ID starting with 46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6 not found: ID does not exist" containerID="46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.872561 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6"} err="failed to get container status \"46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6\": rpc error: code = NotFound desc = could not find container \"46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6\": container with ID starting with 46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6 not found: ID does not exist" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.872589 4995 scope.go:117] "RemoveContainer" containerID="1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7" Jan 26 23:29:52 crc kubenswrapper[4995]: E0126 23:29:52.872915 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7\": container with ID 
starting with 1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7 not found: ID does not exist" containerID="1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.872944 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7"} err="failed to get container status \"1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7\": rpc error: code = NotFound desc = could not find container \"1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7\": container with ID starting with 1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7 not found: ID does not exist" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.791340 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.809725 4995 generic.go:334] "Generic (PLEG): container finished" podID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerID="13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858" exitCode=0 Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.809808 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerDied","Data":"13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858"} Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.809845 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerDied","Data":"15dcd2f1dacb6b1874128ac2c9ca47a1269e26431a781aee91bfcaa8e21c1b83"} Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.809850 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.809866 4995 scope.go:117] "RemoveContainer" containerID="1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.814942 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-8r7vh"] Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815267 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-notification-agent" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815284 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-notification-agent" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815297 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-central-agent" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815304 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-central-agent" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815316 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-api" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815322 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-api" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815336 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bb37bcc-61c1-4154-8ee5-991a34693b5d" containerName="watcher-applier" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815341 4995 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2bb37bcc-61c1-4154-8ee5-991a34693b5d" containerName="watcher-applier" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815349 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="969a304d-b02f-40b9-b439-9f3f5b88ccfa" containerName="mariadb-account-delete" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815355 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="969a304d-b02f-40b9-b439-9f3f5b88ccfa" containerName="mariadb-account-delete" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815366 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-kuttl-api-log" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815373 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-kuttl-api-log" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815389 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc0299c2-2a71-4542-bc23-10e088bfec0d" containerName="watcher-decision-engine" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815395 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc0299c2-2a71-4542-bc23-10e088bfec0d" containerName="watcher-decision-engine" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815409 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="proxy-httpd" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815415 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="proxy-httpd" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815426 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="sg-core" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815432 4995 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="sg-core" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815577 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bb37bcc-61c1-4154-8ee5-991a34693b5d" containerName="watcher-applier" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815585 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-api" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815595 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="proxy-httpd" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815606 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-kuttl-api-log" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815615 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="969a304d-b02f-40b9-b439-9f3f5b88ccfa" containerName="mariadb-account-delete" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815622 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-notification-agent" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815631 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="sg-core" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815640 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-central-agent" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815649 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc0299c2-2a71-4542-bc23-10e088bfec0d" containerName="watcher-decision-engine" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.821131 
4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.825164 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/360b1483-8046-4c4c-920d-69387e2fbbed-operator-scripts\") pod \"watcher-db-create-8r7vh\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.825250 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v89tj\" (UniqueName: \"kubernetes.io/projected/360b1483-8046-4c4c-920d-69387e2fbbed-kube-api-access-v89tj\") pod \"watcher-db-create-8r7vh\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.837437 4995 scope.go:117] "RemoveContainer" containerID="21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.839010 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8r7vh"] Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.869595 4995 scope.go:117] "RemoveContainer" containerID="4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.907139 4995 scope.go:117] "RemoveContainer" containerID="13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.924300 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-26de-account-create-update-h8699"] Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.925606 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926042 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-sg-core-conf-yaml\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926138 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-log-httpd\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926165 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-scripts\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926585 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkf54\" (UniqueName: \"kubernetes.io/projected/5779d6d0-6f61-467c-b521-a16e0201f7ed-kube-api-access-pkf54\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926691 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-combined-ca-bundle\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926790 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-ceilometer-tls-certs\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926812 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-config-data\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926865 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-run-httpd\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.927318 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/360b1483-8046-4c4c-920d-69387e2fbbed-operator-scripts\") pod \"watcher-db-create-8r7vh\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.927376 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v89tj\" (UniqueName: \"kubernetes.io/projected/360b1483-8046-4c4c-920d-69387e2fbbed-kube-api-access-v89tj\") pod \"watcher-db-create-8r7vh\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.928609 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.929543 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.930284 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.931452 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/360b1483-8046-4c4c-920d-69387e2fbbed-operator-scripts\") pod \"watcher-db-create-8r7vh\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.933347 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5779d6d0-6f61-467c-b521-a16e0201f7ed-kube-api-access-pkf54" (OuterVolumeSpecName: "kube-api-access-pkf54") pod "5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "kube-api-access-pkf54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.951666 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-scripts" (OuterVolumeSpecName: "scripts") pod "5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.960740 4995 scope.go:117] "RemoveContainer" containerID="1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.960840 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v89tj\" (UniqueName: \"kubernetes.io/projected/360b1483-8046-4c4c-920d-69387e2fbbed-kube-api-access-v89tj\") pod \"watcher-db-create-8r7vh\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.960928 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-26de-account-create-update-h8699"] Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.961310 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9\": container with ID starting with 1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9 not found: ID does not exist" containerID="1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.961339 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9"} err="failed to get container status \"1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9\": rpc error: code = NotFound desc = could not find container \"1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9\": container with ID starting with 1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9 not found: ID does not exist" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.961359 4995 scope.go:117] "RemoveContainer" 
containerID="21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.961726 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e\": container with ID starting with 21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e not found: ID does not exist" containerID="21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.961778 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e"} err="failed to get container status \"21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e\": rpc error: code = NotFound desc = could not find container \"21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e\": container with ID starting with 21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e not found: ID does not exist" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.961808 4995 scope.go:117] "RemoveContainer" containerID="4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.968860 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8\": container with ID starting with 4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8 not found: ID does not exist" containerID="4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.968902 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8"} err="failed to get container status \"4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8\": rpc error: code = NotFound desc = could not find container \"4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8\": container with ID starting with 4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8 not found: ID does not exist" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.968926 4995 scope.go:117] "RemoveContainer" containerID="13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.969790 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858\": container with ID starting with 13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858 not found: ID does not exist" containerID="13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.969896 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858"} err="failed to get container status \"13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858\": rpc error: code = NotFound desc = could not find container \"13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858\": container with ID starting with 13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858 not found: ID does not exist" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.981542 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod 
"5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.015992 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.029941 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-operator-scripts\") pod \"watcher-26de-account-create-update-h8699\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.030022 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvm5c\" (UniqueName: \"kubernetes.io/projected/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-kube-api-access-bvm5c\") pod \"watcher-26de-account-create-update-h8699\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.030074 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.030086 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.030096 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.030118 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.030126 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.030135 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkf54\" (UniqueName: \"kubernetes.io/projected/5779d6d0-6f61-467c-b521-a16e0201f7ed-kube-api-access-pkf54\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.063863 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.070160 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-config-data" (OuterVolumeSpecName: "config-data") pod "5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.131513 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvm5c\" (UniqueName: \"kubernetes.io/projected/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-kube-api-access-bvm5c\") pod \"watcher-26de-account-create-update-h8699\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.131712 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-operator-scripts\") pod \"watcher-26de-account-create-update-h8699\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.131792 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.131809 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.132415 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-operator-scripts\") pod \"watcher-26de-account-create-update-h8699\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.139522 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.142785 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.151728 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.159784 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvm5c\" (UniqueName: \"kubernetes.io/projected/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-kube-api-access-bvm5c\") pod \"watcher-26de-account-create-update-h8699\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.180939 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.183216 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.186227 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.186499 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.189986 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.191143 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234468 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-scripts\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234508 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n7pq\" (UniqueName: \"kubernetes.io/projected/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-kube-api-access-9n7pq\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234580 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234626 
4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-run-httpd\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234658 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234699 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234726 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-log-httpd\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234754 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-config-data\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.338661 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.339030 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-run-httpd\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.339076 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.339094 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.339153 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-log-httpd\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.339185 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-config-data\") pod \"ceilometer-0\" (UID: 
\"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.339245 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-scripts\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.339276 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n7pq\" (UniqueName: \"kubernetes.io/projected/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-kube-api-access-9n7pq\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.340081 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-run-httpd\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.341273 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-log-httpd\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.344696 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.344875 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.346708 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-config-data\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.348827 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-scripts\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.349084 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.357477 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n7pq\" (UniqueName: \"kubernetes.io/projected/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-kube-api-access-9n7pq\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.421345 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.503047 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.560893 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" path="/var/lib/kubelet/pods/08c10a90-cf36-46d8-9d0a-8152c08eccf9/volumes" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.561971 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" path="/var/lib/kubelet/pods/5779d6d0-6f61-467c-b521-a16e0201f7ed/volumes" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.562909 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="969a304d-b02f-40b9-b439-9f3f5b88ccfa" path="/var/lib/kubelet/pods/969a304d-b02f-40b9-b439-9f3f5b88ccfa/volumes" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.563892 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0710d60-452a-4ffb-80e7-cf4b95c4b93c" path="/var/lib/kubelet/pods/c0710d60-452a-4ffb-80e7-cf4b95c4b93c/volumes" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.564464 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db61ff94-84e4-46ff-affd-1d1fd691a219" path="/var/lib/kubelet/pods/db61ff94-84e4-46ff-affd-1d1fd691a219/volumes" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.565056 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc0299c2-2a71-4542-bc23-10e088bfec0d" path="/var/lib/kubelet/pods/dc0299c2-2a71-4542-bc23-10e088bfec0d/volumes" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.676690 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8r7vh"] Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 
23:29:54.822661 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-8r7vh" event={"ID":"360b1483-8046-4c4c-920d-69387e2fbbed","Type":"ContainerStarted","Data":"833c638770cf9d272616427da3157be1474f20a0ade1400ac79962a7b73c6e8e"} Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.953839 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-26de-account-create-update-h8699"] Jan 26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.060412 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:55 crc kubenswrapper[4995]: W0126 23:29:55.062802 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f8b520e_94ee_43d6_bd95_d3b1b0a10649.slice/crio-aa67959476738862834dc8998fdfc7da48cfa14012e478d6d42b65aed2aa482f WatchSource:0}: Error finding container aa67959476738862834dc8998fdfc7da48cfa14012e478d6d42b65aed2aa482f: Status 404 returned error can't find the container with id aa67959476738862834dc8998fdfc7da48cfa14012e478d6d42b65aed2aa482f Jan 26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.834011 4995 generic.go:334] "Generic (PLEG): container finished" podID="360b1483-8046-4c4c-920d-69387e2fbbed" containerID="b04176a0e27de47ec9992ca7aa97e0c6c4c8aae35383f6b313a755fda54d8e47" exitCode=0 Jan 26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.834469 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-8r7vh" event={"ID":"360b1483-8046-4c4c-920d-69387e2fbbed","Type":"ContainerDied","Data":"b04176a0e27de47ec9992ca7aa97e0c6c4c8aae35383f6b313a755fda54d8e47"} Jan 26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.836804 4995 generic.go:334] "Generic (PLEG): container finished" podID="ab224b66-6f5e-4e78-bdc4-e913dcb2250a" containerID="9bcf59f8068a58a5908f7f9f490fcde236bda08e654b64f1d471d1bef1b45cfc" exitCode=0 Jan 
26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.836846 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" event={"ID":"ab224b66-6f5e-4e78-bdc4-e913dcb2250a","Type":"ContainerDied","Data":"9bcf59f8068a58a5908f7f9f490fcde236bda08e654b64f1d471d1bef1b45cfc"} Jan 26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.836867 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" event={"ID":"ab224b66-6f5e-4e78-bdc4-e913dcb2250a","Type":"ContainerStarted","Data":"a623d35a183514c4477dab94518c449d6888d68791e57b1f6029091ee004575f"} Jan 26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.838680 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerStarted","Data":"8d2bd0f5b7597157a9cb981c13d45c9442331cbe46c3e93f20bf03bd3f8e6320"} Jan 26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.838712 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerStarted","Data":"aa67959476738862834dc8998fdfc7da48cfa14012e478d6d42b65aed2aa482f"} Jan 26 23:29:56 crc kubenswrapper[4995]: I0126 23:29:56.861078 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerStarted","Data":"f115c8acd4047a269367680cb5e5077d9449d56ed4326ed7a82693f8a1db6b72"} Jan 26 23:29:56 crc kubenswrapper[4995]: I0126 23:29:56.861758 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerStarted","Data":"94d9d8bc5f94e5baf7ccac973e0ed26921a007783ddea5f0a6c09cd10d4ddfd5"} Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.300200 4995 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.313835 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/360b1483-8046-4c4c-920d-69387e2fbbed-operator-scripts\") pod \"360b1483-8046-4c4c-920d-69387e2fbbed\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.314324 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v89tj\" (UniqueName: \"kubernetes.io/projected/360b1483-8046-4c4c-920d-69387e2fbbed-kube-api-access-v89tj\") pod \"360b1483-8046-4c4c-920d-69387e2fbbed\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.315611 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/360b1483-8046-4c4c-920d-69387e2fbbed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "360b1483-8046-4c4c-920d-69387e2fbbed" (UID: "360b1483-8046-4c4c-920d-69387e2fbbed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.325868 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/360b1483-8046-4c4c-920d-69387e2fbbed-kube-api-access-v89tj" (OuterVolumeSpecName: "kube-api-access-v89tj") pod "360b1483-8046-4c4c-920d-69387e2fbbed" (UID: "360b1483-8046-4c4c-920d-69387e2fbbed"). InnerVolumeSpecName "kube-api-access-v89tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.376078 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.415284 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-operator-scripts\") pod \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.415474 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvm5c\" (UniqueName: \"kubernetes.io/projected/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-kube-api-access-bvm5c\") pod \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.415851 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v89tj\" (UniqueName: \"kubernetes.io/projected/360b1483-8046-4c4c-920d-69387e2fbbed-kube-api-access-v89tj\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.415873 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/360b1483-8046-4c4c-920d-69387e2fbbed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.418050 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab224b66-6f5e-4e78-bdc4-e913dcb2250a" (UID: "ab224b66-6f5e-4e78-bdc4-e913dcb2250a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.420218 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-kube-api-access-bvm5c" (OuterVolumeSpecName: "kube-api-access-bvm5c") pod "ab224b66-6f5e-4e78-bdc4-e913dcb2250a" (UID: "ab224b66-6f5e-4e78-bdc4-e913dcb2250a"). InnerVolumeSpecName "kube-api-access-bvm5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.517143 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.517183 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvm5c\" (UniqueName: \"kubernetes.io/projected/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-kube-api-access-bvm5c\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.873033 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-8r7vh" event={"ID":"360b1483-8046-4c4c-920d-69387e2fbbed","Type":"ContainerDied","Data":"833c638770cf9d272616427da3157be1474f20a0ade1400ac79962a7b73c6e8e"} Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.873079 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="833c638770cf9d272616427da3157be1474f20a0ade1400ac79962a7b73c6e8e" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.873183 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.887197 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" event={"ID":"ab224b66-6f5e-4e78-bdc4-e913dcb2250a","Type":"ContainerDied","Data":"a623d35a183514c4477dab94518c449d6888d68791e57b1f6029091ee004575f"} Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.887239 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a623d35a183514c4477dab94518c449d6888d68791e57b1f6029091ee004575f" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.887238 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:58 crc kubenswrapper[4995]: I0126 23:29:58.896651 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerStarted","Data":"41e65b4db8702b530f563d695c4ed0a469a72700beb73c508fc925f625247825"} Jan 26 23:29:58 crc kubenswrapper[4995]: I0126 23:29:58.896949 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:58 crc kubenswrapper[4995]: I0126 23:29:58.921974 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.197292262 podStartE2EDuration="4.921955346s" podCreationTimestamp="2026-01-26 23:29:54 +0000 UTC" firstStartedPulling="2026-01-26 23:29:55.064973802 +0000 UTC m=+1299.229681267" lastFinishedPulling="2026-01-26 23:29:57.789636886 +0000 UTC m=+1301.954344351" observedRunningTime="2026-01-26 23:29:58.919188396 +0000 UTC m=+1303.083895861" watchObservedRunningTime="2026-01-26 23:29:58.921955346 +0000 UTC m=+1303.086662831" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 
23:29:59.140436 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh"] Jan 26 23:29:59 crc kubenswrapper[4995]: E0126 23:29:59.140865 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360b1483-8046-4c4c-920d-69387e2fbbed" containerName="mariadb-database-create" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.140891 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="360b1483-8046-4c4c-920d-69387e2fbbed" containerName="mariadb-database-create" Jan 26 23:29:59 crc kubenswrapper[4995]: E0126 23:29:59.140904 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab224b66-6f5e-4e78-bdc4-e913dcb2250a" containerName="mariadb-account-create-update" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.140916 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab224b66-6f5e-4e78-bdc4-e913dcb2250a" containerName="mariadb-account-create-update" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.141258 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab224b66-6f5e-4e78-bdc4-e913dcb2250a" containerName="mariadb-account-create-update" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.141298 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="360b1483-8046-4c4c-920d-69387e2fbbed" containerName="mariadb-database-create" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.142358 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.146264 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-h5tln" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.146531 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.149809 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh"] Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.251555 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-db-sync-config-data\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.251631 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-config-data\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.251837 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbf2\" (UniqueName: \"kubernetes.io/projected/74804f16-0037-44f0-a6a5-71414a33cee2-kube-api-access-xxbf2\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.251924 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.352991 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-db-sync-config-data\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.353071 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-config-data\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.353121 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxbf2\" (UniqueName: \"kubernetes.io/projected/74804f16-0037-44f0-a6a5-71414a33cee2-kube-api-access-xxbf2\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.353142 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 
23:29:59.358490 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-db-sync-config-data\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.358701 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.358988 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-config-data\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.373989 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxbf2\" (UniqueName: \"kubernetes.io/projected/74804f16-0037-44f0-a6a5-71414a33cee2-kube-api-access-xxbf2\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.460390 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.059656 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh"] Jan 26 23:30:00 crc kubenswrapper[4995]: W0126 23:30:00.063730 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74804f16_0037_44f0_a6a5_71414a33cee2.slice/crio-b50b4a30f6f75cc6a1d277a120e89c5beece2b2c0b19beb2d56bbdbcdd7beede WatchSource:0}: Error finding container b50b4a30f6f75cc6a1d277a120e89c5beece2b2c0b19beb2d56bbdbcdd7beede: Status 404 returned error can't find the container with id b50b4a30f6f75cc6a1d277a120e89c5beece2b2c0b19beb2d56bbdbcdd7beede Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.132555 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p"] Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.134051 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.137879 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.138237 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.142569 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p"] Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.166147 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54656312-1776-448a-aed7-759b65eb3763-secret-volume\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.166204 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54656312-1776-448a-aed7-759b65eb3763-config-volume\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.166272 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6nch\" (UniqueName: \"kubernetes.io/projected/54656312-1776-448a-aed7-759b65eb3763-kube-api-access-k6nch\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.267284 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54656312-1776-448a-aed7-759b65eb3763-secret-volume\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.267344 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54656312-1776-448a-aed7-759b65eb3763-config-volume\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.267388 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6nch\" (UniqueName: \"kubernetes.io/projected/54656312-1776-448a-aed7-759b65eb3763-kube-api-access-k6nch\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.268272 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54656312-1776-448a-aed7-759b65eb3763-config-volume\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.272284 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/54656312-1776-448a-aed7-759b65eb3763-secret-volume\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.287879 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6nch\" (UniqueName: \"kubernetes.io/projected/54656312-1776-448a-aed7-759b65eb3763-kube-api-access-k6nch\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.485332 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.913873 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" event={"ID":"74804f16-0037-44f0-a6a5-71414a33cee2","Type":"ContainerStarted","Data":"5881a006fd0e8b545fdd02ea477aabaa591905ac84b4483905c5ea65a3a15279"} Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.914114 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" event={"ID":"74804f16-0037-44f0-a6a5-71414a33cee2","Type":"ContainerStarted","Data":"b50b4a30f6f75cc6a1d277a120e89c5beece2b2c0b19beb2d56bbdbcdd7beede"} Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.932727 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" podStartSLOduration=1.9327120089999998 podStartE2EDuration="1.932712009s" podCreationTimestamp="2026-01-26 23:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
23:30:00.929240833 +0000 UTC m=+1305.093948318" watchObservedRunningTime="2026-01-26 23:30:00.932712009 +0000 UTC m=+1305.097419474" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.993835 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p"] Jan 26 23:30:01 crc kubenswrapper[4995]: W0126 23:30:01.002642 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54656312_1776_448a_aed7_759b65eb3763.slice/crio-301075a855e97b7d9c2bf4ca3127150a885502ea1afa7b11b0616dc5c03d6d14 WatchSource:0}: Error finding container 301075a855e97b7d9c2bf4ca3127150a885502ea1afa7b11b0616dc5c03d6d14: Status 404 returned error can't find the container with id 301075a855e97b7d9c2bf4ca3127150a885502ea1afa7b11b0616dc5c03d6d14 Jan 26 23:30:01 crc kubenswrapper[4995]: I0126 23:30:01.921890 4995 generic.go:334] "Generic (PLEG): container finished" podID="54656312-1776-448a-aed7-759b65eb3763" containerID="742b3454037bbd44149a6c25e12eb0286e362e0941f889c7b4b09e41324862da" exitCode=0 Jan 26 23:30:01 crc kubenswrapper[4995]: I0126 23:30:01.922012 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" event={"ID":"54656312-1776-448a-aed7-759b65eb3763","Type":"ContainerDied","Data":"742b3454037bbd44149a6c25e12eb0286e362e0941f889c7b4b09e41324862da"} Jan 26 23:30:01 crc kubenswrapper[4995]: I0126 23:30:01.922474 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" event={"ID":"54656312-1776-448a-aed7-759b65eb3763","Type":"ContainerStarted","Data":"301075a855e97b7d9c2bf4ca3127150a885502ea1afa7b11b0616dc5c03d6d14"} Jan 26 23:30:02 crc kubenswrapper[4995]: I0126 23:30:02.932399 4995 generic.go:334] "Generic (PLEG): container finished" podID="74804f16-0037-44f0-a6a5-71414a33cee2" 
containerID="5881a006fd0e8b545fdd02ea477aabaa591905ac84b4483905c5ea65a3a15279" exitCode=0 Jan 26 23:30:02 crc kubenswrapper[4995]: I0126 23:30:02.932470 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" event={"ID":"74804f16-0037-44f0-a6a5-71414a33cee2","Type":"ContainerDied","Data":"5881a006fd0e8b545fdd02ea477aabaa591905ac84b4483905c5ea65a3a15279"} Jan 26 23:30:03 crc kubenswrapper[4995]: E0126 23:30:03.231258 4995 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.164:48942->38.102.83.164:42819: write tcp 38.102.83.164:48942->38.102.83.164:42819: write: broken pipe Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.323308 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.425520 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54656312-1776-448a-aed7-759b65eb3763-secret-volume\") pod \"54656312-1776-448a-aed7-759b65eb3763\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.425577 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6nch\" (UniqueName: \"kubernetes.io/projected/54656312-1776-448a-aed7-759b65eb3763-kube-api-access-k6nch\") pod \"54656312-1776-448a-aed7-759b65eb3763\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.425676 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54656312-1776-448a-aed7-759b65eb3763-config-volume\") pod \"54656312-1776-448a-aed7-759b65eb3763\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " Jan 26 23:30:03 crc 
kubenswrapper[4995]: I0126 23:30:03.427007 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54656312-1776-448a-aed7-759b65eb3763-config-volume" (OuterVolumeSpecName: "config-volume") pod "54656312-1776-448a-aed7-759b65eb3763" (UID: "54656312-1776-448a-aed7-759b65eb3763"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.436280 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54656312-1776-448a-aed7-759b65eb3763-kube-api-access-k6nch" (OuterVolumeSpecName: "kube-api-access-k6nch") pod "54656312-1776-448a-aed7-759b65eb3763" (UID: "54656312-1776-448a-aed7-759b65eb3763"). InnerVolumeSpecName "kube-api-access-k6nch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.442367 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54656312-1776-448a-aed7-759b65eb3763-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "54656312-1776-448a-aed7-759b65eb3763" (UID: "54656312-1776-448a-aed7-759b65eb3763"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.538308 4995 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54656312-1776-448a-aed7-759b65eb3763-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.538347 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6nch\" (UniqueName: \"kubernetes.io/projected/54656312-1776-448a-aed7-759b65eb3763-kube-api-access-k6nch\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.538357 4995 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54656312-1776-448a-aed7-759b65eb3763-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.943648 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.943644 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" event={"ID":"54656312-1776-448a-aed7-759b65eb3763","Type":"ContainerDied","Data":"301075a855e97b7d9c2bf4ca3127150a885502ea1afa7b11b0616dc5c03d6d14"} Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.944147 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="301075a855e97b7d9c2bf4ca3127150a885502ea1afa7b11b0616dc5c03d6d14" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.313343 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.353611 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-db-sync-config-data\") pod \"74804f16-0037-44f0-a6a5-71414a33cee2\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.353682 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-config-data\") pod \"74804f16-0037-44f0-a6a5-71414a33cee2\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.353873 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-combined-ca-bundle\") pod \"74804f16-0037-44f0-a6a5-71414a33cee2\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.353998 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxbf2\" (UniqueName: \"kubernetes.io/projected/74804f16-0037-44f0-a6a5-71414a33cee2-kube-api-access-xxbf2\") pod \"74804f16-0037-44f0-a6a5-71414a33cee2\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.382855 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74804f16-0037-44f0-a6a5-71414a33cee2-kube-api-access-xxbf2" (OuterVolumeSpecName: "kube-api-access-xxbf2") pod "74804f16-0037-44f0-a6a5-71414a33cee2" (UID: "74804f16-0037-44f0-a6a5-71414a33cee2"). InnerVolumeSpecName "kube-api-access-xxbf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.383166 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "74804f16-0037-44f0-a6a5-71414a33cee2" (UID: "74804f16-0037-44f0-a6a5-71414a33cee2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.414677 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-config-data" (OuterVolumeSpecName: "config-data") pod "74804f16-0037-44f0-a6a5-71414a33cee2" (UID: "74804f16-0037-44f0-a6a5-71414a33cee2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.416965 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74804f16-0037-44f0-a6a5-71414a33cee2" (UID: "74804f16-0037-44f0-a6a5-71414a33cee2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.456275 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxbf2\" (UniqueName: \"kubernetes.io/projected/74804f16-0037-44f0-a6a5-71414a33cee2-kube-api-access-xxbf2\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.456304 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.456315 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.456325 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.972816 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" event={"ID":"74804f16-0037-44f0-a6a5-71414a33cee2","Type":"ContainerDied","Data":"b50b4a30f6f75cc6a1d277a120e89c5beece2b2c0b19beb2d56bbdbcdd7beede"} Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.973076 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b50b4a30f6f75cc6a1d277a120e89c5beece2b2c0b19beb2d56bbdbcdd7beede" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.973188 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.243321 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:05 crc kubenswrapper[4995]: E0126 23:30:05.243640 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74804f16-0037-44f0-a6a5-71414a33cee2" containerName="watcher-kuttl-db-sync" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.243652 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="74804f16-0037-44f0-a6a5-71414a33cee2" containerName="watcher-kuttl-db-sync" Jan 26 23:30:05 crc kubenswrapper[4995]: E0126 23:30:05.243666 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54656312-1776-448a-aed7-759b65eb3763" containerName="collect-profiles" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.243672 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="54656312-1776-448a-aed7-759b65eb3763" containerName="collect-profiles" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.243824 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="74804f16-0037-44f0-a6a5-71414a33cee2" containerName="watcher-kuttl-db-sync" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.243834 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="54656312-1776-448a-aed7-759b65eb3763" containerName="collect-profiles" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.244761 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.262953 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.263037 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-h5tln" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.264040 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.264126 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.269770 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.277155 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.278273 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.289875 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.306295 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.355075 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.356401 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.358772 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.366470 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.374402 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.374441 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.374502 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.374520 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7fm\" (UniqueName: \"kubernetes.io/projected/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-kube-api-access-7n7fm\") pod \"watcher-kuttl-api-0\" (UID: 
\"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.374891 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.374945 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-logs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.374976 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476249 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476315 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-config-data\") pod \"watcher-kuttl-api-0\" (UID: 
\"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476382 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476408 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n7fm\" (UniqueName: \"kubernetes.io/projected/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-kube-api-access-7n7fm\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476461 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476498 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476531 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476557 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhnc2\" (UniqueName: \"kubernetes.io/projected/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-kube-api-access-bhnc2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476595 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-logs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476619 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476654 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476689 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476724 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19b6df5-abba-4eeb-9103-ac018449be94-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476749 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476788 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxgsw\" (UniqueName: \"kubernetes.io/projected/a19b6df5-abba-4eeb-9103-ac018449be94-kube-api-access-nxgsw\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476823 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.477196 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-logs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.480594 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.485741 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.485889 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.485951 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.486758 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-public-tls-certs\") pod \"watcher-kuttl-api-0\" 
(UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.498124 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n7fm\" (UniqueName: \"kubernetes.io/projected/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-kube-api-access-7n7fm\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.577728 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.578819 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.578866 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhnc2\" (UniqueName: \"kubernetes.io/projected/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-kube-api-access-bhnc2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.578912 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-config-data\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.579036 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.579506 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19b6df5-abba-4eeb-9103-ac018449be94-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.579545 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.579593 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxgsw\" (UniqueName: \"kubernetes.io/projected/a19b6df5-abba-4eeb-9103-ac018449be94-kube-api-access-nxgsw\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.579675 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.579874 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19b6df5-abba-4eeb-9103-ac018449be94-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.580137 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.581711 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.582931 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.582995 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.583770 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.584251 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.589531 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.611919 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxgsw\" (UniqueName: \"kubernetes.io/projected/a19b6df5-abba-4eeb-9103-ac018449be94-kube-api-access-nxgsw\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.613187 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhnc2\" (UniqueName: \"kubernetes.io/projected/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-kube-api-access-bhnc2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.674184 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.905384 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:06 crc kubenswrapper[4995]: I0126 23:30:06.075488 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:06 crc kubenswrapper[4995]: I0126 23:30:06.176626 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:06 crc kubenswrapper[4995]: I0126 23:30:06.365895 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:06 crc kubenswrapper[4995]: W0126 23:30:06.381287 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda19b6df5_abba_4eeb_9103_ac018449be94.slice/crio-57c851c6087377395317ac025b2f640b05445a770811145b3bf8fc60a87a2620 WatchSource:0}: Error finding container 57c851c6087377395317ac025b2f640b05445a770811145b3bf8fc60a87a2620: Status 404 returned error can't find the container with id 57c851c6087377395317ac025b2f640b05445a770811145b3bf8fc60a87a2620 Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.000028 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e035ba66-a2ec-4127-a799-bb9dd2d07e2f","Type":"ContainerStarted","Data":"d9c94c5ab51cf39db5bd5239323a38c0c83e1c1237247e92b84f13365da920b7"} Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.000425 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e035ba66-a2ec-4127-a799-bb9dd2d07e2f","Type":"ContainerStarted","Data":"b81eb9321e4696c7a5dc2b9010299843c0050f48570fe2196a764234a9455846"} Jan 26 23:30:07 crc 
kubenswrapper[4995]: I0126 23:30:07.000445 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e035ba66-a2ec-4127-a799-bb9dd2d07e2f","Type":"ContainerStarted","Data":"54980638d9727bc6af52a006a8f0a0d24420ad4add393daec95332d4aba13d66"} Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.000469 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.002322 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991","Type":"ContainerStarted","Data":"3284f8c951b3e7130a4783b7d13c32061d2e7016da9e1aeeb19449a9e7dee999"} Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.002406 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991","Type":"ContainerStarted","Data":"cba3ab62d7d62a0d684ffabbf01be7f833800d0b23faa4d8fc8f45160ef60210"} Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.004433 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a19b6df5-abba-4eeb-9103-ac018449be94","Type":"ContainerStarted","Data":"9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58"} Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.005069 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a19b6df5-abba-4eeb-9103-ac018449be94","Type":"ContainerStarted","Data":"57c851c6087377395317ac025b2f640b05445a770811145b3bf8fc60a87a2620"} Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.034828 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.034810155 
podStartE2EDuration="2.034810155s" podCreationTimestamp="2026-01-26 23:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:07.028127328 +0000 UTC m=+1311.192834793" watchObservedRunningTime="2026-01-26 23:30:07.034810155 +0000 UTC m=+1311.199517610" Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.048564 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.048547778 podStartE2EDuration="2.048547778s" podCreationTimestamp="2026-01-26 23:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:07.048130408 +0000 UTC m=+1311.212837873" watchObservedRunningTime="2026-01-26 23:30:07.048547778 +0000 UTC m=+1311.213255243" Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.069262 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.069243536 podStartE2EDuration="2.069243536s" podCreationTimestamp="2026-01-26 23:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:07.065938303 +0000 UTC m=+1311.230645758" watchObservedRunningTime="2026-01-26 23:30:07.069243536 +0000 UTC m=+1311.233951001" Jan 26 23:30:09 crc kubenswrapper[4995]: I0126 23:30:09.562376 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:10 crc kubenswrapper[4995]: I0126 23:30:10.590355 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:10 crc kubenswrapper[4995]: I0126 23:30:10.907225 4995 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:15 crc kubenswrapper[4995]: I0126 23:30:15.590858 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:15 crc kubenswrapper[4995]: I0126 23:30:15.608595 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:15 crc kubenswrapper[4995]: I0126 23:30:15.675368 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:15 crc kubenswrapper[4995]: I0126 23:30:15.707252 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:15 crc kubenswrapper[4995]: I0126 23:30:15.907262 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:15 crc kubenswrapper[4995]: I0126 23:30:15.955767 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:16 crc kubenswrapper[4995]: I0126 23:30:16.087127 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:16 crc kubenswrapper[4995]: I0126 23:30:16.094872 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:16 crc kubenswrapper[4995]: I0126 23:30:16.116199 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:16 crc kubenswrapper[4995]: I0126 23:30:16.122594 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 
23:30:18 crc kubenswrapper[4995]: I0126 23:30:18.259374 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:30:18 crc kubenswrapper[4995]: I0126 23:30:18.259934 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-central-agent" containerID="cri-o://8d2bd0f5b7597157a9cb981c13d45c9442331cbe46c3e93f20bf03bd3f8e6320" gracePeriod=30 Jan 26 23:30:18 crc kubenswrapper[4995]: I0126 23:30:18.260696 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="proxy-httpd" containerID="cri-o://41e65b4db8702b530f563d695c4ed0a469a72700beb73c508fc925f625247825" gracePeriod=30 Jan 26 23:30:18 crc kubenswrapper[4995]: I0126 23:30:18.260977 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="sg-core" containerID="cri-o://f115c8acd4047a269367680cb5e5077d9449d56ed4326ed7a82693f8a1db6b72" gracePeriod=30 Jan 26 23:30:18 crc kubenswrapper[4995]: I0126 23:30:18.261021 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-notification-agent" containerID="cri-o://94d9d8bc5f94e5baf7ccac973e0ed26921a007783ddea5f0a6c09cd10d4ddfd5" gracePeriod=30 Jan 26 23:30:18 crc kubenswrapper[4995]: I0126 23:30:18.276596 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.149:3000/\": EOF" Jan 26 23:30:19 crc kubenswrapper[4995]: I0126 23:30:19.112481 4995 generic.go:334] "Generic 
(PLEG): container finished" podID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerID="41e65b4db8702b530f563d695c4ed0a469a72700beb73c508fc925f625247825" exitCode=0 Jan 26 23:30:19 crc kubenswrapper[4995]: I0126 23:30:19.112542 4995 generic.go:334] "Generic (PLEG): container finished" podID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerID="f115c8acd4047a269367680cb5e5077d9449d56ed4326ed7a82693f8a1db6b72" exitCode=2 Jan 26 23:30:19 crc kubenswrapper[4995]: I0126 23:30:19.112563 4995 generic.go:334] "Generic (PLEG): container finished" podID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerID="8d2bd0f5b7597157a9cb981c13d45c9442331cbe46c3e93f20bf03bd3f8e6320" exitCode=0 Jan 26 23:30:19 crc kubenswrapper[4995]: I0126 23:30:19.112599 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerDied","Data":"41e65b4db8702b530f563d695c4ed0a469a72700beb73c508fc925f625247825"} Jan 26 23:30:19 crc kubenswrapper[4995]: I0126 23:30:19.112642 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerDied","Data":"f115c8acd4047a269367680cb5e5077d9449d56ed4326ed7a82693f8a1db6b72"} Jan 26 23:30:19 crc kubenswrapper[4995]: I0126 23:30:19.112669 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerDied","Data":"8d2bd0f5b7597157a9cb981c13d45c9442331cbe46c3e93f20bf03bd3f8e6320"} Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.162352 4995 generic.go:334] "Generic (PLEG): container finished" podID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerID="94d9d8bc5f94e5baf7ccac973e0ed26921a007783ddea5f0a6c09cd10d4ddfd5" exitCode=0 Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.162594 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerDied","Data":"94d9d8bc5f94e5baf7ccac973e0ed26921a007783ddea5f0a6c09cd10d4ddfd5"} Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.293674 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412560 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n7pq\" (UniqueName: \"kubernetes.io/projected/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-kube-api-access-9n7pq\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412678 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-config-data\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412752 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-combined-ca-bundle\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412785 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-run-httpd\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412810 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-ceilometer-tls-certs\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412831 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-sg-core-conf-yaml\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412872 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-log-httpd\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412941 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-scripts\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.413465 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.414268 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.418870 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-kube-api-access-9n7pq" (OuterVolumeSpecName: "kube-api-access-9n7pq") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). InnerVolumeSpecName "kube-api-access-9n7pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.421464 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-scripts" (OuterVolumeSpecName: "scripts") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.446955 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.475427 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.482182 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.515038 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.515087 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n7pq\" (UniqueName: \"kubernetes.io/projected/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-kube-api-access-9n7pq\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.515126 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.515142 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.515159 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.515171 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.515182 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.529296 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-config-data" (OuterVolumeSpecName: "config-data") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.617596 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.172990 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerDied","Data":"aa67959476738862834dc8998fdfc7da48cfa14012e478d6d42b65aed2aa482f"} Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.173069 4995 scope.go:117] "RemoveContainer" containerID="41e65b4db8702b530f563d695c4ed0a469a72700beb73c508fc925f625247825" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.174152 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.193305 4995 scope.go:117] "RemoveContainer" containerID="f115c8acd4047a269367680cb5e5077d9449d56ed4326ed7a82693f8a1db6b72" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.215699 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.224915 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.228650 4995 scope.go:117] "RemoveContainer" containerID="94d9d8bc5f94e5baf7ccac973e0ed26921a007783ddea5f0a6c09cd10d4ddfd5" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.235847 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:30:24 crc kubenswrapper[4995]: E0126 23:30:24.236185 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="sg-core" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236201 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="sg-core" Jan 26 23:30:24 crc kubenswrapper[4995]: E0126 23:30:24.236212 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-notification-agent" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236219 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-notification-agent" Jan 26 23:30:24 crc kubenswrapper[4995]: E0126 23:30:24.236231 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-central-agent" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236237 4995 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-central-agent" Jan 26 23:30:24 crc kubenswrapper[4995]: E0126 23:30:24.236258 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="proxy-httpd" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236264 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="proxy-httpd" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236403 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="proxy-httpd" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236413 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-notification-agent" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236423 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="sg-core" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236435 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-central-agent" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.238574 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.243491 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.243807 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.244010 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.251120 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.269647 4995 scope.go:117] "RemoveContainer" containerID="8d2bd0f5b7597157a9cb981c13d45c9442331cbe46c3e93f20bf03bd3f8e6320" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429488 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-scripts\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429599 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-log-httpd\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429640 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429678 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wz2l\" (UniqueName: \"kubernetes.io/projected/e2a868ee-449d-451a-8f70-ec5800231c45-kube-api-access-9wz2l\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429711 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429728 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-run-httpd\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429745 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429779 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-config-data\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.530620 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wz2l\" (UniqueName: \"kubernetes.io/projected/e2a868ee-449d-451a-8f70-ec5800231c45-kube-api-access-9wz2l\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531015 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531048 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-run-httpd\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531726 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531796 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-config-data\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531882 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-scripts\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531902 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-log-httpd\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531921 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-run-httpd\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531975 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.533199 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-log-httpd\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.535661 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.535866 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-config-data\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.536202 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" path="/var/lib/kubelet/pods/7f8b520e-94ee-43d6-bd95-d3b1b0a10649/volumes" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.536539 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.537325 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.553218 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-scripts\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.555921 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wz2l\" (UniqueName: 
\"kubernetes.io/projected/e2a868ee-449d-451a-8f70-ec5800231c45-kube-api-access-9wz2l\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.572546 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:25 crc kubenswrapper[4995]: I0126 23:30:25.083738 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:30:25 crc kubenswrapper[4995]: I0126 23:30:25.184746 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerStarted","Data":"c97dd3f359f663362140f94ede7f9243c229adb056421298fec32584624f10b0"} Jan 26 23:30:25 crc kubenswrapper[4995]: I0126 23:30:25.998740 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:30:25 crc kubenswrapper[4995]: I0126 23:30:25.998951 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/memcached-0" podUID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" containerName="memcached" containerID="cri-o://3e04e760b0c77644e191bf4781347a5b2f4ffde2d098dc88a856836722be3efd" gracePeriod=30 Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.065627 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.065841 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="a19b6df5-abba-4eeb-9103-ac018449be94" containerName="watcher-applier" containerID="cri-o://9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58" gracePeriod=30 Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.098026 4995 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.098290 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" containerName="watcher-decision-engine" containerID="cri-o://3284f8c951b3e7130a4783b7d13c32061d2e7016da9e1aeeb19449a9e7dee999" gracePeriod=30 Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.109051 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.109339 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-kuttl-api-log" containerID="cri-o://b81eb9321e4696c7a5dc2b9010299843c0050f48570fe2196a764234a9455846" gracePeriod=30 Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.109380 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-api" containerID="cri-o://d9c94c5ab51cf39db5bd5239323a38c0c83e1c1237247e92b84f13365da920b7" gracePeriod=30 Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.168265 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w6lw7"] Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.174641 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w6lw7"] Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.193332 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerStarted","Data":"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0"} Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.232433 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-sf9jb"] Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.233458 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.235975 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.236300 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-mtls" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.243159 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-sf9jb"] Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.362968 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-config-data\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.363569 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-combined-ca-bundle\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.363627 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-credential-keys\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.363663 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-fernet-keys\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.363707 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-scripts\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.363762 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k84vl\" (UniqueName: \"kubernetes.io/projected/c5595470-f70f-4bc9-9012-b939a6b2fc0f-kube-api-access-k84vl\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.363816 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-cert-memcached-mtls\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.466222 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-combined-ca-bundle\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.466323 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-credential-keys\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.466355 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-fernet-keys\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.466386 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-scripts\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.466421 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k84vl\" (UniqueName: \"kubernetes.io/projected/c5595470-f70f-4bc9-9012-b939a6b2fc0f-kube-api-access-k84vl\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.466452 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-cert-memcached-mtls\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.466540 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-config-data\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.471892 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-credential-keys\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.472401 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-fernet-keys\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.473436 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-scripts\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.475529 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-cert-memcached-mtls\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.489716 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-config-data\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.490830 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k84vl\" (UniqueName: \"kubernetes.io/projected/c5595470-f70f-4bc9-9012-b939a6b2fc0f-kube-api-access-k84vl\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.493744 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-combined-ca-bundle\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.529906 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="049184a2-2d7f-4107-8a72-197fede36e5b" path="/var/lib/kubelet/pods/049184a2-2d7f-4107-8a72-197fede36e5b/volumes" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.582086 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:27 crc kubenswrapper[4995]: E0126 23:30:27.086416 4995 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode035ba66_a2ec_4127_a799_bb9dd2d07e2f.slice/crio-conmon-d9c94c5ab51cf39db5bd5239323a38c0c83e1c1237247e92b84f13365da920b7.scope\": RecentStats: unable to find data in memory cache]" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.209032 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-sf9jb"] Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.248446 4995 generic.go:334] "Generic (PLEG): container finished" podID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerID="d9c94c5ab51cf39db5bd5239323a38c0c83e1c1237247e92b84f13365da920b7" exitCode=0 Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.248797 4995 generic.go:334] "Generic (PLEG): container finished" podID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerID="b81eb9321e4696c7a5dc2b9010299843c0050f48570fe2196a764234a9455846" exitCode=143 Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.249224 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e035ba66-a2ec-4127-a799-bb9dd2d07e2f","Type":"ContainerDied","Data":"d9c94c5ab51cf39db5bd5239323a38c0c83e1c1237247e92b84f13365da920b7"} Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.249286 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e035ba66-a2ec-4127-a799-bb9dd2d07e2f","Type":"ContainerDied","Data":"b81eb9321e4696c7a5dc2b9010299843c0050f48570fe2196a764234a9455846"} Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.255352 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerStarted","Data":"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0"} Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.507348 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.589455 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n7fm\" (UniqueName: \"kubernetes.io/projected/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-kube-api-access-7n7fm\") pod \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.589717 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-combined-ca-bundle\") pod \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.589770 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-config-data\") pod \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.589798 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-custom-prometheus-ca\") pod \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.589825 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-public-tls-certs\") pod \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.589878 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-logs\") pod \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.589906 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-internal-tls-certs\") pod \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.591369 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-logs" (OuterVolumeSpecName: "logs") pod "e035ba66-a2ec-4127-a799-bb9dd2d07e2f" (UID: "e035ba66-a2ec-4127-a799-bb9dd2d07e2f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.610361 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-kube-api-access-7n7fm" (OuterVolumeSpecName: "kube-api-access-7n7fm") pod "e035ba66-a2ec-4127-a799-bb9dd2d07e2f" (UID: "e035ba66-a2ec-4127-a799-bb9dd2d07e2f"). InnerVolumeSpecName "kube-api-access-7n7fm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.627326 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e035ba66-a2ec-4127-a799-bb9dd2d07e2f" (UID: "e035ba66-a2ec-4127-a799-bb9dd2d07e2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.633237 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e035ba66-a2ec-4127-a799-bb9dd2d07e2f" (UID: "e035ba66-a2ec-4127-a799-bb9dd2d07e2f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.646325 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e035ba66-a2ec-4127-a799-bb9dd2d07e2f" (UID: "e035ba66-a2ec-4127-a799-bb9dd2d07e2f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.661288 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e035ba66-a2ec-4127-a799-bb9dd2d07e2f" (UID: "e035ba66-a2ec-4127-a799-bb9dd2d07e2f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.691517 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.691556 4995 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.691567 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n7fm\" (UniqueName: \"kubernetes.io/projected/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-kube-api-access-7n7fm\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.691576 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.691584 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.691593 4995 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.697224 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-config-data" (OuterVolumeSpecName: "config-data") pod "e035ba66-a2ec-4127-a799-bb9dd2d07e2f" (UID: 
"e035ba66-a2ec-4127-a799-bb9dd2d07e2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.792860 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.824740 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.031233 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-config-data\") pod \"a19b6df5-abba-4eeb-9103-ac018449be94\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.031300 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19b6df5-abba-4eeb-9103-ac018449be94-logs\") pod \"a19b6df5-abba-4eeb-9103-ac018449be94\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.031345 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-combined-ca-bundle\") pod \"a19b6df5-abba-4eeb-9103-ac018449be94\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.031512 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxgsw\" (UniqueName: \"kubernetes.io/projected/a19b6df5-abba-4eeb-9103-ac018449be94-kube-api-access-nxgsw\") pod \"a19b6df5-abba-4eeb-9103-ac018449be94\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " Jan 26 
23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.032019 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a19b6df5-abba-4eeb-9103-ac018449be94-logs" (OuterVolumeSpecName: "logs") pod "a19b6df5-abba-4eeb-9103-ac018449be94" (UID: "a19b6df5-abba-4eeb-9103-ac018449be94"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.037547 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19b6df5-abba-4eeb-9103-ac018449be94-kube-api-access-nxgsw" (OuterVolumeSpecName: "kube-api-access-nxgsw") pod "a19b6df5-abba-4eeb-9103-ac018449be94" (UID: "a19b6df5-abba-4eeb-9103-ac018449be94"). InnerVolumeSpecName "kube-api-access-nxgsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.072052 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-config-data" (OuterVolumeSpecName: "config-data") pod "a19b6df5-abba-4eeb-9103-ac018449be94" (UID: "a19b6df5-abba-4eeb-9103-ac018449be94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.074505 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a19b6df5-abba-4eeb-9103-ac018449be94" (UID: "a19b6df5-abba-4eeb-9103-ac018449be94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.132788 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxgsw\" (UniqueName: \"kubernetes.io/projected/a19b6df5-abba-4eeb-9103-ac018449be94-kube-api-access-nxgsw\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.132818 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.132828 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19b6df5-abba-4eeb-9103-ac018449be94-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.132837 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.294827 4995 generic.go:334] "Generic (PLEG): container finished" podID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" containerID="3e04e760b0c77644e191bf4781347a5b2f4ffde2d098dc88a856836722be3efd" exitCode=0 Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.295000 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"37ec7b7e-84e8-4a58-b676-c06ed9a0809e","Type":"ContainerDied","Data":"3e04e760b0c77644e191bf4781347a5b2f4ffde2d098dc88a856836722be3efd"} Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.302415 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e035ba66-a2ec-4127-a799-bb9dd2d07e2f","Type":"ContainerDied","Data":"54980638d9727bc6af52a006a8f0a0d24420ad4add393daec95332d4aba13d66"} Jan 26 
23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.302457 4995 scope.go:117] "RemoveContainer" containerID="d9c94c5ab51cf39db5bd5239323a38c0c83e1c1237247e92b84f13365da920b7" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.302574 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.312989 4995 generic.go:334] "Generic (PLEG): container finished" podID="8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" containerID="3284f8c951b3e7130a4783b7d13c32061d2e7016da9e1aeeb19449a9e7dee999" exitCode=0 Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.313060 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991","Type":"ContainerDied","Data":"3284f8c951b3e7130a4783b7d13c32061d2e7016da9e1aeeb19449a9e7dee999"} Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.335807 4995 generic.go:334] "Generic (PLEG): container finished" podID="a19b6df5-abba-4eeb-9103-ac018449be94" containerID="9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58" exitCode=0 Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.335883 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a19b6df5-abba-4eeb-9103-ac018449be94","Type":"ContainerDied","Data":"9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58"} Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.335883 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.335907 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a19b6df5-abba-4eeb-9103-ac018449be94","Type":"ContainerDied","Data":"57c851c6087377395317ac025b2f640b05445a770811145b3bf8fc60a87a2620"} Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.346906 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" event={"ID":"c5595470-f70f-4bc9-9012-b939a6b2fc0f","Type":"ContainerStarted","Data":"27d7920d9fd33f11ed78c7916026f8f12eca21c60e182186baff705d11e4cf74"} Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.346949 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" event={"ID":"c5595470-f70f-4bc9-9012-b939a6b2fc0f","Type":"ContainerStarted","Data":"9c6d8281bea2660095c708e79d4acf2f75f4040b4b00019316a9d2a2e7c295bd"} Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.348890 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerStarted","Data":"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06"} Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.373060 4995 scope.go:117] "RemoveContainer" containerID="b81eb9321e4696c7a5dc2b9010299843c0050f48570fe2196a764234a9455846" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.380153 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.402462 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.422614 4995 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: E0126 23:30:28.422989 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19b6df5-abba-4eeb-9103-ac018449be94" containerName="watcher-applier" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.423003 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19b6df5-abba-4eeb-9103-ac018449be94" containerName="watcher-applier" Jan 26 23:30:28 crc kubenswrapper[4995]: E0126 23:30:28.423023 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-api" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.423031 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-api" Jan 26 23:30:28 crc kubenswrapper[4995]: E0126 23:30:28.423063 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-kuttl-api-log" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.423071 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-kuttl-api-log" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.423309 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19b6df5-abba-4eeb-9103-ac018449be94" containerName="watcher-applier" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.423325 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-kuttl-api-log" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.423337 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-api" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.423350 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" podStartSLOduration=2.423331514 podStartE2EDuration="2.423331514s" podCreationTimestamp="2026-01-26 23:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:28.383507688 +0000 UTC m=+1332.548215153" watchObservedRunningTime="2026-01-26 23:30:28.423331514 +0000 UTC m=+1332.588038979" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.426855 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.433735 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.434018 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.439139 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.442711 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.449733 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.460629 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.468048 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.471453 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.473613 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.475816 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.475866 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.475902 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca013d92-6492-419e-b3c4-cfd440daa2bb-logs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.475978 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.476005 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.476145 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.476180 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl66x\" (UniqueName: \"kubernetes.io/projected/ca013d92-6492-419e-b3c4-cfd440daa2bb-kube-api-access-wl66x\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.476350 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.503255 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.549597 4995 scope.go:117] "RemoveContainer" containerID="9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.551429 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="a19b6df5-abba-4eeb-9103-ac018449be94" path="/var/lib/kubelet/pods/a19b6df5-abba-4eeb-9103-ac018449be94/volumes" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.552013 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" path="/var/lib/kubelet/pods/e035ba66-a2ec-4127-a799-bb9dd2d07e2f/volumes" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.564182 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.568307 4995 scope.go:117] "RemoveContainer" containerID="9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58" Jan 26 23:30:28 crc kubenswrapper[4995]: E0126 23:30:28.568763 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58\": container with ID starting with 9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58 not found: ID does not exist" containerID="9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.568794 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58"} err="failed to get container status \"9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58\": rpc error: code = NotFound desc = could not find container \"9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58\": container with ID starting with 9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58 not found: ID does not exist" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.569927 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.580863 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.580925 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.580950 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca013d92-6492-419e-b3c4-cfd440daa2bb-logs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.580982 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581002 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" 
Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581037 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmhpj\" (UniqueName: \"kubernetes.io/projected/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-kube-api-access-cmhpj\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581064 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581115 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581138 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581157 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl66x\" (UniqueName: \"kubernetes.io/projected/ca013d92-6492-419e-b3c4-cfd440daa2bb-kube-api-access-wl66x\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581193 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581249 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581264 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.582703 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca013d92-6492-419e-b3c4-cfd440daa2bb-logs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.589816 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc 
kubenswrapper[4995]: I0126 23:30:28.609780 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.611112 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.617224 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.619281 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.621655 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl66x\" (UniqueName: \"kubernetes.io/projected/ca013d92-6492-419e-b3c4-cfd440daa2bb-kube-api-access-wl66x\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.622283 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.681990 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kolla-config\") pod \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682041 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-combined-ca-bundle\") pod \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682092 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-config-data\") pod \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682152 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-combined-ca-bundle\") pod \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682189 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-config-data\") pod \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\" (UID: 
\"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682225 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhnc2\" (UniqueName: \"kubernetes.io/projected/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-kube-api-access-bhnc2\") pod \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682244 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-memcached-tls-certs\") pod \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682280 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qjbg\" (UniqueName: \"kubernetes.io/projected/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kube-api-access-2qjbg\") pod \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682325 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-custom-prometheus-ca\") pod \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682376 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-logs\") pod \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682570 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cmhpj\" (UniqueName: \"kubernetes.io/projected/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-kube-api-access-cmhpj\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682598 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682657 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682697 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682731 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.683045 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "37ec7b7e-84e8-4a58-b676-c06ed9a0809e" (UID: "37ec7b7e-84e8-4a58-b676-c06ed9a0809e"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.686034 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.686254 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-logs" (OuterVolumeSpecName: "logs") pod "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" (UID: "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.686505 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-config-data" (OuterVolumeSpecName: "config-data") pod "37ec7b7e-84e8-4a58-b676-c06ed9a0809e" (UID: "37ec7b7e-84e8-4a58-b676-c06ed9a0809e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.688205 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.689622 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.691718 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kube-api-access-2qjbg" (OuterVolumeSpecName: "kube-api-access-2qjbg") pod "37ec7b7e-84e8-4a58-b676-c06ed9a0809e" (UID: "37ec7b7e-84e8-4a58-b676-c06ed9a0809e"). InnerVolumeSpecName "kube-api-access-2qjbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.692383 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-kube-api-access-bhnc2" (OuterVolumeSpecName: "kube-api-access-bhnc2") pod "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" (UID: "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991"). InnerVolumeSpecName "kube-api-access-bhnc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.697686 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.705490 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmhpj\" (UniqueName: \"kubernetes.io/projected/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-kube-api-access-cmhpj\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.707351 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37ec7b7e-84e8-4a58-b676-c06ed9a0809e" (UID: "37ec7b7e-84e8-4a58-b676-c06ed9a0809e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.716434 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" (UID: "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.717704 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" (UID: "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.725665 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "37ec7b7e-84e8-4a58-b676-c06ed9a0809e" (UID: "37ec7b7e-84e8-4a58-b676-c06ed9a0809e"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.731069 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-config-data" (OuterVolumeSpecName: "config-data") pod "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" (UID: "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784174 4995 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784208 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784218 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784226 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784233 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784243 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhnc2\" (UniqueName: \"kubernetes.io/projected/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-kube-api-access-bhnc2\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784252 4995 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 
23:30:28.784261 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qjbg\" (UniqueName: \"kubernetes.io/projected/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kube-api-access-2qjbg\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784271 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784281 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.860459 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.924887 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:29 crc kubenswrapper[4995]: W0126 23:30:29.150765 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca013d92_6492_419e_b3c4_cfd440daa2bb.slice/crio-e52e933d187713fbdb117d3d9fedf5e70884c027cb040bccef5fbae1f2e8951c WatchSource:0}: Error finding container e52e933d187713fbdb117d3d9fedf5e70884c027cb040bccef5fbae1f2e8951c: Status 404 returned error can't find the container with id e52e933d187713fbdb117d3d9fedf5e70884c027cb040bccef5fbae1f2e8951c Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.152955 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.359780 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.359772 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991","Type":"ContainerDied","Data":"cba3ab62d7d62a0d684ffabbf01be7f833800d0b23faa4d8fc8f45160ef60210"} Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.360340 4995 scope.go:117] "RemoveContainer" containerID="3284f8c951b3e7130a4783b7d13c32061d2e7016da9e1aeeb19449a9e7dee999" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.361266 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ca013d92-6492-419e-b3c4-cfd440daa2bb","Type":"ContainerStarted","Data":"e52e933d187713fbdb117d3d9fedf5e70884c027cb040bccef5fbae1f2e8951c"} Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.365821 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerStarted","Data":"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a"} Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.366616 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.370048 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.373313 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"37ec7b7e-84e8-4a58-b676-c06ed9a0809e","Type":"ContainerDied","Data":"1fe63fca4fd6cb5199a750cf9e863e7fdd11939b8e0ee09e81633ccef9bdd3c7"} Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.376796 4995 scope.go:117] "RemoveContainer" containerID="3e04e760b0c77644e191bf4781347a5b2f4ffde2d098dc88a856836722be3efd" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.404570 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.667497953 podStartE2EDuration="5.404550367s" podCreationTimestamp="2026-01-26 23:30:24 +0000 UTC" firstStartedPulling="2026-01-26 23:30:25.095902211 +0000 UTC m=+1329.260609676" lastFinishedPulling="2026-01-26 23:30:28.832954635 +0000 UTC m=+1332.997662090" observedRunningTime="2026-01-26 23:30:29.402407813 +0000 UTC m=+1333.567115268" watchObservedRunningTime="2026-01-26 23:30:29.404550367 +0000 UTC m=+1333.569257842" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.442992 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.474594 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.500803 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.524629 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: E0126 23:30:29.529394 4995 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" containerName="watcher-decision-engine" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.529469 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" containerName="watcher-decision-engine" Jan 26 23:30:29 crc kubenswrapper[4995]: E0126 23:30:29.529524 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" containerName="memcached" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.529533 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" containerName="memcached" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.529735 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" containerName="memcached" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.529755 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" containerName="watcher-decision-engine" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.559528 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.559569 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.559582 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.559656 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.560453 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.561623 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.563869 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-zzlxj" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.564215 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.564454 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.565534 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.575941 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.596527 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.596656 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.596686 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.596771 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb5bh\" (UniqueName: \"kubernetes.io/projected/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-kube-api-access-bb5bh\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.597044 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.597136 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/118c105c-80f5-4d0f-94c2-17f3269025ca-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.597176 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-kolla-config\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.597213 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-config-data\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.597245 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.597278 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.597303 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7djc9\" (UniqueName: \"kubernetes.io/projected/118c105c-80f5-4d0f-94c2-17f3269025ca-kube-api-access-7djc9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698447 
4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-config-data\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698497 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698518 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698535 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7djc9\" (UniqueName: \"kubernetes.io/projected/118c105c-80f5-4d0f-94c2-17f3269025ca-kube-api-access-7djc9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698562 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698588 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698604 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698621 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb5bh\" (UniqueName: \"kubernetes.io/projected/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-kube-api-access-bb5bh\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698681 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698704 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/118c105c-80f5-4d0f-94c2-17f3269025ca-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698723 4995 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-kolla-config\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.699289 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/118c105c-80f5-4d0f-94c2-17f3269025ca-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.699434 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-kolla-config\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.699502 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-config-data\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.702666 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.702843 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-config-data\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.709267 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.711782 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.711927 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.713015 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.713715 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7djc9\" (UniqueName: \"kubernetes.io/projected/118c105c-80f5-4d0f-94c2-17f3269025ca-kube-api-access-7djc9\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.715553 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb5bh\" (UniqueName: \"kubernetes.io/projected/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-kube-api-access-bb5bh\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.903010 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.913682 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.417165 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.422513 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"77a1e608-88ba-44dc-a4fd-86bd6bd980c1","Type":"ContainerStarted","Data":"a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6"} Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.422573 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"77a1e608-88ba-44dc-a4fd-86bd6bd980c1","Type":"ContainerStarted","Data":"c39928c36c8af1a9535983a878c5e72ae844418dbec585db7b98acb4c5ad7317"} Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.427464 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ca013d92-6492-419e-b3c4-cfd440daa2bb","Type":"ContainerStarted","Data":"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff"} Jan 26 23:30:30 crc 
kubenswrapper[4995]: I0126 23:30:30.427586 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ca013d92-6492-419e-b3c4-cfd440daa2bb","Type":"ContainerStarted","Data":"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837"} Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.427640 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.439870 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.439854291 podStartE2EDuration="2.439854291s" podCreationTimestamp="2026-01-26 23:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:30.43821391 +0000 UTC m=+1334.602921375" watchObservedRunningTime="2026-01-26 23:30:30.439854291 +0000 UTC m=+1334.604561756" Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.465434 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.46541337 podStartE2EDuration="2.46541337s" podCreationTimestamp="2026-01-26 23:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:30.463612725 +0000 UTC m=+1334.628320190" watchObservedRunningTime="2026-01-26 23:30:30.46541337 +0000 UTC m=+1334.630120855" Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.528477 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" path="/var/lib/kubelet/pods/37ec7b7e-84e8-4a58-b676-c06ed9a0809e/volumes" Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.530864 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" path="/var/lib/kubelet/pods/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991/volumes" Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.535598 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.438490 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"118c105c-80f5-4d0f-94c2-17f3269025ca","Type":"ContainerStarted","Data":"6afd5efd4dcf18121a5fd9c8de3507a46a5319c8e70b9ca7bc1a4ac45736a922"} Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.439864 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"118c105c-80f5-4d0f-94c2-17f3269025ca","Type":"ContainerStarted","Data":"6780d1fd068d258f993980132b0b6bd2df34b9245721daf2a80227aeaf1d0ca4"} Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.441190 4995 generic.go:334] "Generic (PLEG): container finished" podID="c5595470-f70f-4bc9-9012-b939a6b2fc0f" containerID="27d7920d9fd33f11ed78c7916026f8f12eca21c60e182186baff705d11e4cf74" exitCode=0 Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.441290 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" event={"ID":"c5595470-f70f-4bc9-9012-b939a6b2fc0f","Type":"ContainerDied","Data":"27d7920d9fd33f11ed78c7916026f8f12eca21c60e182186baff705d11e4cf74"} Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.443955 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9","Type":"ContainerStarted","Data":"0954fd65e1cb05e0fa6de2a487d8062942ffb2496c99f5b616fd7a07a90b35c9"} Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.443994 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.444006 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9","Type":"ContainerStarted","Data":"a4cafba8cb82575ae1601987b63e67d675aee0ec038ba40ab42c92ee946dad4b"} Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.467795 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.467772101 podStartE2EDuration="2.467772101s" podCreationTimestamp="2026-01-26 23:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:31.458001487 +0000 UTC m=+1335.622708972" watchObservedRunningTime="2026-01-26 23:30:31.467772101 +0000 UTC m=+1335.632479576" Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.508023 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=2.508003597 podStartE2EDuration="2.508003597s" podCreationTimestamp="2026-01-26 23:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:31.486381996 +0000 UTC m=+1335.651089461" watchObservedRunningTime="2026-01-26 23:30:31.508003597 +0000 UTC m=+1335.672711072" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.839749 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.860225 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k84vl\" (UniqueName: \"kubernetes.io/projected/c5595470-f70f-4bc9-9012-b939a6b2fc0f-kube-api-access-k84vl\") pod \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.860281 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-fernet-keys\") pod \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.860302 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-credential-keys\") pod \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.860338 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-scripts\") pod \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.860363 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-config-data\") pod \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.870759 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c5595470-f70f-4bc9-9012-b939a6b2fc0f" (UID: "c5595470-f70f-4bc9-9012-b939a6b2fc0f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.877271 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c5595470-f70f-4bc9-9012-b939a6b2fc0f" (UID: "c5595470-f70f-4bc9-9012-b939a6b2fc0f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.888276 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-scripts" (OuterVolumeSpecName: "scripts") pod "c5595470-f70f-4bc9-9012-b939a6b2fc0f" (UID: "c5595470-f70f-4bc9-9012-b939a6b2fc0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.892929 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5595470-f70f-4bc9-9012-b939a6b2fc0f-kube-api-access-k84vl" (OuterVolumeSpecName: "kube-api-access-k84vl") pod "c5595470-f70f-4bc9-9012-b939a6b2fc0f" (UID: "c5595470-f70f-4bc9-9012-b939a6b2fc0f"). InnerVolumeSpecName "kube-api-access-k84vl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.902351 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.910330 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-config-data" (OuterVolumeSpecName: "config-data") pod "c5595470-f70f-4bc9-9012-b939a6b2fc0f" (UID: "c5595470-f70f-4bc9-9012-b939a6b2fc0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.961637 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-cert-memcached-mtls\") pod \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.961680 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-combined-ca-bundle\") pod \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.961984 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.961997 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k84vl\" (UniqueName: \"kubernetes.io/projected/c5595470-f70f-4bc9-9012-b939a6b2fc0f-kube-api-access-k84vl\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.962005 4995 
reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.962013 4995 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.962021 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.986751 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5595470-f70f-4bc9-9012-b939a6b2fc0f" (UID: "c5595470-f70f-4bc9-9012-b939a6b2fc0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.023417 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "c5595470-f70f-4bc9-9012-b939a6b2fc0f" (UID: "c5595470-f70f-4bc9-9012-b939a6b2fc0f"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.063203 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.063246 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.475457 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.476189 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" event={"ID":"c5595470-f70f-4bc9-9012-b939a6b2fc0f","Type":"ContainerDied","Data":"9c6d8281bea2660095c708e79d4acf2f75f4040b4b00019316a9d2a2e7c295bd"} Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.476241 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c6d8281bea2660095c708e79d4acf2f75f4040b4b00019316a9d2a2e7c295bd" Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.861399 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.925957 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:38 crc kubenswrapper[4995]: I0126 23:30:38.862338 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:38 crc kubenswrapper[4995]: I0126 23:30:38.881438 4995 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:38 crc kubenswrapper[4995]: I0126 23:30:38.925483 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:38 crc kubenswrapper[4995]: I0126 23:30:38.950279 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:39 crc kubenswrapper[4995]: I0126 23:30:39.539208 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:39 crc kubenswrapper[4995]: I0126 23:30:39.553620 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:39 crc kubenswrapper[4995]: I0126 23:30:39.904015 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:39 crc kubenswrapper[4995]: I0126 23:30:39.915277 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:39 crc kubenswrapper[4995]: I0126 23:30:39.929319 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.082864 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-984bfcd89-8d4rw"] Jan 26 23:30:40 crc kubenswrapper[4995]: E0126 23:30:40.083480 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5595470-f70f-4bc9-9012-b939a6b2fc0f" containerName="keystone-bootstrap" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.083598 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5595470-f70f-4bc9-9012-b939a6b2fc0f" containerName="keystone-bootstrap" Jan 26 23:30:40 crc 
kubenswrapper[4995]: I0126 23:30:40.083878 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5595470-f70f-4bc9-9012-b939a6b2fc0f" containerName="keystone-bootstrap" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.084792 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.091290 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-984bfcd89-8d4rw"] Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.283850 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-fernet-keys\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.283920 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-internal-tls-certs\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.283952 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-cert-memcached-mtls\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.284037 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-scripts\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.284061 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-credential-keys\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.284082 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hzfx\" (UniqueName: \"kubernetes.io/projected/257ee213-d2fa-4d94-9b26-0c62b5411e44-kube-api-access-4hzfx\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.284119 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-public-tls-certs\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.284245 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-config-data\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.284308 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-combined-ca-bundle\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.386017 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-scripts\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.386376 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-credential-keys\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.386499 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hzfx\" (UniqueName: \"kubernetes.io/projected/257ee213-d2fa-4d94-9b26-0c62b5411e44-kube-api-access-4hzfx\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.386673 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-public-tls-certs\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.386804 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-config-data\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.386943 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-combined-ca-bundle\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.387147 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-fernet-keys\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.387296 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-internal-tls-certs\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.387461 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-cert-memcached-mtls\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.395441 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-config-data\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.395996 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-fernet-keys\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.397400 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-internal-tls-certs\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.398740 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-public-tls-certs\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.400353 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-scripts\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.400675 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-credential-keys\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.400987 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-cert-memcached-mtls\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.405422 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hzfx\" (UniqueName: \"kubernetes.io/projected/257ee213-d2fa-4d94-9b26-0c62b5411e44-kube-api-access-4hzfx\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.408608 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-combined-ca-bundle\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.529494 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.555812 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.704941 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:41 crc kubenswrapper[4995]: I0126 23:30:41.173449 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-984bfcd89-8d4rw"] Jan 26 23:30:41 crc kubenswrapper[4995]: I0126 23:30:41.304607 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:41 crc kubenswrapper[4995]: I0126 23:30:41.538872 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" event={"ID":"257ee213-d2fa-4d94-9b26-0c62b5411e44","Type":"ContainerStarted","Data":"23c730fe870113fb434733985f99e79cb0778e240c1b753c124033eca5e27b4e"} Jan 26 23:30:41 crc kubenswrapper[4995]: I0126 23:30:41.539042 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-kuttl-api-log" containerID="cri-o://7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837" gracePeriod=30 Jan 26 23:30:41 crc kubenswrapper[4995]: I0126 23:30:41.539121 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-api" containerID="cri-o://7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff" gracePeriod=30 Jan 26 23:30:41 crc kubenswrapper[4995]: I0126 23:30:41.539290 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:41 crc kubenswrapper[4995]: I0126 23:30:41.539320 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" event={"ID":"257ee213-d2fa-4d94-9b26-0c62b5411e44","Type":"ContainerStarted","Data":"e1f8e8f179d34f122aa821ddd0d1878723a611ef37641339a791f8ed2e0c069b"} Jan 26 23:30:41 
crc kubenswrapper[4995]: I0126 23:30:41.570302 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" podStartSLOduration=1.570280106 podStartE2EDuration="1.570280106s" podCreationTimestamp="2026-01-26 23:30:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:41.559004494 +0000 UTC m=+1345.723711979" watchObservedRunningTime="2026-01-26 23:30:41.570280106 +0000 UTC m=+1345.734987581" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.415784 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520204 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-cert-memcached-mtls\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520353 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-custom-prometheus-ca\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520394 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-public-tls-certs\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520502 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-config-data\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520544 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-combined-ca-bundle\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520593 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca013d92-6492-419e-b3c4-cfd440daa2bb-logs\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520725 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl66x\" (UniqueName: \"kubernetes.io/projected/ca013d92-6492-419e-b3c4-cfd440daa2bb-kube-api-access-wl66x\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520775 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-internal-tls-certs\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.521387 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca013d92-6492-419e-b3c4-cfd440daa2bb-logs" (OuterVolumeSpecName: "logs") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: "ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.521850 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca013d92-6492-419e-b3c4-cfd440daa2bb-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.528394 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca013d92-6492-419e-b3c4-cfd440daa2bb-kube-api-access-wl66x" (OuterVolumeSpecName: "kube-api-access-wl66x") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: "ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "kube-api-access-wl66x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.597118 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: "ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.602415 4995 generic.go:334] "Generic (PLEG): container finished" podID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerID="7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff" exitCode=0 Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.602464 4995 generic.go:334] "Generic (PLEG): container finished" podID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerID="7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837" exitCode=143 Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.602663 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.603079 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-config-data" (OuterVolumeSpecName: "config-data") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: "ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.605585 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: "ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.609457 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: "ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.620280 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: "ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.623458 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.623659 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.623759 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl66x\" (UniqueName: \"kubernetes.io/projected/ca013d92-6492-419e-b3c4-cfd440daa2bb-kube-api-access-wl66x\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.623838 4995 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.623925 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.624001 4995 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.631227 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: 
"ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.669336 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ca013d92-6492-419e-b3c4-cfd440daa2bb","Type":"ContainerDied","Data":"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff"} Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.669554 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ca013d92-6492-419e-b3c4-cfd440daa2bb","Type":"ContainerDied","Data":"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837"} Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.669681 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ca013d92-6492-419e-b3c4-cfd440daa2bb","Type":"ContainerDied","Data":"e52e933d187713fbdb117d3d9fedf5e70884c027cb040bccef5fbae1f2e8951c"} Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.669610 4995 scope.go:117] "RemoveContainer" containerID="7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.721886 4995 scope.go:117] "RemoveContainer" containerID="7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.726371 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.742643 4995 scope.go:117] "RemoveContainer" containerID="7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff" Jan 26 23:30:42 crc kubenswrapper[4995]: E0126 23:30:42.743094 4995 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff\": container with ID starting with 7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff not found: ID does not exist" containerID="7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.743156 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff"} err="failed to get container status \"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff\": rpc error: code = NotFound desc = could not find container \"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff\": container with ID starting with 7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff not found: ID does not exist" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.743182 4995 scope.go:117] "RemoveContainer" containerID="7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837" Jan 26 23:30:42 crc kubenswrapper[4995]: E0126 23:30:42.743932 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837\": container with ID starting with 7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837 not found: ID does not exist" containerID="7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.743956 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837"} err="failed to get container status \"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837\": rpc error: code = NotFound desc = could not find container 
\"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837\": container with ID starting with 7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837 not found: ID does not exist" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.743971 4995 scope.go:117] "RemoveContainer" containerID="7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.744178 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff"} err="failed to get container status \"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff\": rpc error: code = NotFound desc = could not find container \"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff\": container with ID starting with 7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff not found: ID does not exist" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.744196 4995 scope.go:117] "RemoveContainer" containerID="7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.744408 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837"} err="failed to get container status \"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837\": rpc error: code = NotFound desc = could not find container \"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837\": container with ID starting with 7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837 not found: ID does not exist" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.934246 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.943654 4995 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.962462 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:42 crc kubenswrapper[4995]: E0126 23:30:42.962832 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-api" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.962853 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-api" Jan 26 23:30:42 crc kubenswrapper[4995]: E0126 23:30:42.962870 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-kuttl-api-log" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.962879 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-kuttl-api-log" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.963115 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-api" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.963140 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-kuttl-api-log" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.964186 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.969975 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.979396 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.131956 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.132172 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.132284 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkr78\" (UniqueName: \"kubernetes.io/projected/dfd66ee0-752c-4d44-92e1-a287384642e2-kube-api-access-xkr78\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.132506 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: 
\"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.132589 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd66ee0-752c-4d44-92e1-a287384642e2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.132655 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.233614 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.233673 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd66ee0-752c-4d44-92e1-a287384642e2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.233699 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.233748 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.233782 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.233799 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkr78\" (UniqueName: \"kubernetes.io/projected/dfd66ee0-752c-4d44-92e1-a287384642e2-kube-api-access-xkr78\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.234115 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd66ee0-752c-4d44-92e1-a287384642e2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.237803 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 
23:30:43.238491 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.248589 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.251502 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.255353 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkr78\" (UniqueName: \"kubernetes.io/projected/dfd66ee0-752c-4d44-92e1-a287384642e2-kube-api-access-xkr78\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.285952 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.783153 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:44 crc kubenswrapper[4995]: I0126 23:30:44.527940 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" path="/var/lib/kubelet/pods/ca013d92-6492-419e-b3c4-cfd440daa2bb/volumes" Jan 26 23:30:44 crc kubenswrapper[4995]: I0126 23:30:44.631705 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dfd66ee0-752c-4d44-92e1-a287384642e2","Type":"ContainerStarted","Data":"99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6"} Jan 26 23:30:44 crc kubenswrapper[4995]: I0126 23:30:44.631753 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dfd66ee0-752c-4d44-92e1-a287384642e2","Type":"ContainerStarted","Data":"506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1"} Jan 26 23:30:44 crc kubenswrapper[4995]: I0126 23:30:44.631764 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dfd66ee0-752c-4d44-92e1-a287384642e2","Type":"ContainerStarted","Data":"9a7d0124cd7a5360719a3b66cdef998880be62eb0874154eac5904934bc66e9c"} Jan 26 23:30:44 crc kubenswrapper[4995]: I0126 23:30:44.632031 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:44 crc kubenswrapper[4995]: I0126 23:30:44.651433 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.651411371 podStartE2EDuration="2.651411371s" podCreationTimestamp="2026-01-26 23:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:44.650221191 +0000 UTC m=+1348.814928656" watchObservedRunningTime="2026-01-26 23:30:44.651411371 +0000 UTC m=+1348.816118836" Jan 26 23:30:47 crc kubenswrapper[4995]: I0126 23:30:47.008806 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:48 crc kubenswrapper[4995]: I0126 23:30:48.286226 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:53 crc kubenswrapper[4995]: I0126 23:30:53.286983 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:53 crc kubenswrapper[4995]: I0126 23:30:53.291041 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:53 crc kubenswrapper[4995]: I0126 23:30:53.717715 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:54 crc kubenswrapper[4995]: I0126 23:30:54.582624 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:59 crc kubenswrapper[4995]: E0126 23:30:59.993015 4995 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.164:36724->38.102.83.164:42819: write tcp 38.102.83.164:36724->38.102.83.164:42819: write: broken pipe Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.156035 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2nqkb"] Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.159608 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.163358 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2nqkb"] Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.248027 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-utilities\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.248122 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-catalog-content\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.248205 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l728r\" (UniqueName: \"kubernetes.io/projected/fabd6826-906b-4dfc-af45-6d64bacdd794-kube-api-access-l728r\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.349941 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l728r\" (UniqueName: \"kubernetes.io/projected/fabd6826-906b-4dfc-af45-6d64bacdd794-kube-api-access-l728r\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.350012 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-utilities\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.350067 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-catalog-content\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.350592 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-catalog-content\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.350783 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-utilities\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.370014 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l728r\" (UniqueName: \"kubernetes.io/projected/fabd6826-906b-4dfc-af45-6d64bacdd794-kube-api-access-l728r\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.518847 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:10 crc kubenswrapper[4995]: I0126 23:31:10.032550 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2nqkb"] Jan 26 23:31:10 crc kubenswrapper[4995]: I0126 23:31:10.879990 4995 generic.go:334] "Generic (PLEG): container finished" podID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerID="227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce" exitCode=0 Jan 26 23:31:10 crc kubenswrapper[4995]: I0126 23:31:10.880040 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nqkb" event={"ID":"fabd6826-906b-4dfc-af45-6d64bacdd794","Type":"ContainerDied","Data":"227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce"} Jan 26 23:31:10 crc kubenswrapper[4995]: I0126 23:31:10.880121 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nqkb" event={"ID":"fabd6826-906b-4dfc-af45-6d64bacdd794","Type":"ContainerStarted","Data":"db9b7d03b326053c44e82afb8fe738958f31d4b689aa3b9b6e0d3e7411632e71"} Jan 26 23:31:11 crc kubenswrapper[4995]: I0126 23:31:11.893774 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nqkb" event={"ID":"fabd6826-906b-4dfc-af45-6d64bacdd794","Type":"ContainerStarted","Data":"061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae"} Jan 26 23:31:12 crc kubenswrapper[4995]: I0126 23:31:12.279750 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:31:12 crc kubenswrapper[4995]: I0126 23:31:12.347002 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-7cb4bf847-27cbg"] Jan 26 23:31:12 crc kubenswrapper[4995]: I0126 23:31:12.347235 4995 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" podUID="284fb412-d705-4c0a-b11d-74f9074a9b6c" containerName="keystone-api" containerID="cri-o://d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110" gracePeriod=30 Jan 26 23:31:13 crc kubenswrapper[4995]: I0126 23:31:13.912754 4995 generic.go:334] "Generic (PLEG): container finished" podID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerID="061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae" exitCode=0 Jan 26 23:31:13 crc kubenswrapper[4995]: I0126 23:31:13.912814 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nqkb" event={"ID":"fabd6826-906b-4dfc-af45-6d64bacdd794","Type":"ContainerDied","Data":"061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae"} Jan 26 23:31:14 crc kubenswrapper[4995]: I0126 23:31:14.923029 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nqkb" event={"ID":"fabd6826-906b-4dfc-af45-6d64bacdd794","Type":"ContainerStarted","Data":"d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16"} Jan 26 23:31:14 crc kubenswrapper[4995]: I0126 23:31:14.947650 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2nqkb" podStartSLOduration=2.441530821 podStartE2EDuration="5.947627131s" podCreationTimestamp="2026-01-26 23:31:09 +0000 UTC" firstStartedPulling="2026-01-26 23:31:10.881837467 +0000 UTC m=+1375.046544942" lastFinishedPulling="2026-01-26 23:31:14.387933787 +0000 UTC m=+1378.552641252" observedRunningTime="2026-01-26 23:31:14.940090743 +0000 UTC m=+1379.104798208" watchObservedRunningTime="2026-01-26 23:31:14.947627131 +0000 UTC m=+1379.112334596" Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.931052 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.941430 4995 generic.go:334] "Generic (PLEG): container finished" podID="284fb412-d705-4c0a-b11d-74f9074a9b6c" containerID="d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110" exitCode=0 Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.941488 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" event={"ID":"284fb412-d705-4c0a-b11d-74f9074a9b6c","Type":"ContainerDied","Data":"d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110"} Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.941534 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" event={"ID":"284fb412-d705-4c0a-b11d-74f9074a9b6c","Type":"ContainerDied","Data":"c831199d822b765352d7f3cfddb29be2235d20cab03abeb963d2d581104d23cb"} Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.941554 4995 scope.go:117] "RemoveContainer" containerID="d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110" Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953693 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-fernet-keys\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953747 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w7l9\" (UniqueName: \"kubernetes.io/projected/284fb412-d705-4c0a-b11d-74f9074a9b6c-kube-api-access-7w7l9\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953783 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-scripts\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953816 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-credential-keys\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953854 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-combined-ca-bundle\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953874 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-public-tls-certs\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953900 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-internal-tls-certs\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953970 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-config-data\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 
crc kubenswrapper[4995]: I0126 23:31:15.992271 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.992422 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-scripts" (OuterVolumeSpecName: "scripts") pod "284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.992320 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.008381 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284fb412-d705-4c0a-b11d-74f9074a9b6c-kube-api-access-7w7l9" (OuterVolumeSpecName: "kube-api-access-7w7l9") pod "284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "kube-api-access-7w7l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.020749 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-config-data" (OuterVolumeSpecName: "config-data") pod "284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.040318 4995 scope.go:117] "RemoveContainer" containerID="d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110" Jan 26 23:31:16 crc kubenswrapper[4995]: E0126 23:31:16.040834 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110\": container with ID starting with d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110 not found: ID does not exist" containerID="d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.040873 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110"} err="failed to get container status \"d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110\": rpc error: code = NotFound desc = could not find container \"d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110\": container with ID starting with d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110 not found: ID does not exist" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.044423 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.047937 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.055754 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056044 4995 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056073 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w7l9\" (UniqueName: \"kubernetes.io/projected/284fb412-d705-4c0a-b11d-74f9074a9b6c-kube-api-access-7w7l9\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056088 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056451 4995 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056521 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056576 4995 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056638 4995 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056691 4995 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.960366 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:31:17 crc kubenswrapper[4995]: I0126 23:31:17.000215 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-7cb4bf847-27cbg"] Jan 26 23:31:17 crc kubenswrapper[4995]: I0126 23:31:17.007205 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-7cb4bf847-27cbg"] Jan 26 23:31:18 crc kubenswrapper[4995]: I0126 23:31:18.528558 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="284fb412-d705-4c0a-b11d-74f9074a9b6c" path="/var/lib/kubelet/pods/284fb412-d705-4c0a-b11d-74f9074a9b6c/volumes" Jan 26 23:31:19 crc kubenswrapper[4995]: I0126 23:31:19.519543 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:19 crc kubenswrapper[4995]: I0126 23:31:19.519866 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:19 crc kubenswrapper[4995]: I0126 23:31:19.969310 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:31:19 crc kubenswrapper[4995]: I0126 23:31:19.969618 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="ceilometer-central-agent" containerID="cri-o://e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" gracePeriod=30 Jan 26 23:31:19 crc kubenswrapper[4995]: I0126 23:31:19.969776 4995 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="proxy-httpd" containerID="cri-o://5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" gracePeriod=30 Jan 26 23:31:19 crc kubenswrapper[4995]: I0126 23:31:19.969824 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="sg-core" containerID="cri-o://27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" gracePeriod=30 Jan 26 23:31:19 crc kubenswrapper[4995]: I0126 23:31:19.969872 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="ceilometer-notification-agent" containerID="cri-o://c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" gracePeriod=30 Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.579861 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2nqkb" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="registry-server" probeResult="failure" output=< Jan 26 23:31:20 crc kubenswrapper[4995]: timeout: failed to connect service ":50051" within 1s Jan 26 23:31:20 crc kubenswrapper[4995]: > Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.965093 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997139 4995 generic.go:334] "Generic (PLEG): container finished" podID="e2a868ee-449d-451a-8f70-ec5800231c45" containerID="5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" exitCode=0 Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997241 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997316 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerDied","Data":"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a"} Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997368 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerDied","Data":"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06"} Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997388 4995 scope.go:117] "RemoveContainer" containerID="5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997626 4995 generic.go:334] "Generic (PLEG): container finished" podID="e2a868ee-449d-451a-8f70-ec5800231c45" containerID="27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" exitCode=2 Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997649 4995 generic.go:334] "Generic (PLEG): container finished" podID="e2a868ee-449d-451a-8f70-ec5800231c45" containerID="c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" exitCode=0 Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997656 4995 generic.go:334] "Generic (PLEG): container finished" podID="e2a868ee-449d-451a-8f70-ec5800231c45" containerID="e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" exitCode=0 Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997674 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerDied","Data":"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0"} Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997709 
4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerDied","Data":"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0"} Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997719 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerDied","Data":"c97dd3f359f663362140f94ede7f9243c229adb056421298fec32584624f10b0"} Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.018316 4995 scope.go:117] "RemoveContainer" containerID="27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040002 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wz2l\" (UniqueName: \"kubernetes.io/projected/e2a868ee-449d-451a-8f70-ec5800231c45-kube-api-access-9wz2l\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040179 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-scripts\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040234 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-log-httpd\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040300 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-combined-ca-bundle\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040330 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-config-data\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040366 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-run-httpd\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040392 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-ceilometer-tls-certs\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040418 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-sg-core-conf-yaml\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.043212 4995 scope.go:117] "RemoveContainer" containerID="c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.044060 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-run-httpd" (OuterVolumeSpecName: 
"run-httpd") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.044983 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.050221 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-scripts" (OuterVolumeSpecName: "scripts") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.050374 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a868ee-449d-451a-8f70-ec5800231c45-kube-api-access-9wz2l" (OuterVolumeSpecName: "kube-api-access-9wz2l") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "kube-api-access-9wz2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.066256 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.067571 4995 scope.go:117] "RemoveContainer" containerID="e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.084590 4995 scope.go:117] "RemoveContainer" containerID="5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.085064 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": container with ID starting with 5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a not found: ID does not exist" containerID="5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.085116 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a"} err="failed to get container status \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": rpc error: code = NotFound desc = could not find container \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": container with ID starting with 5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.085143 4995 scope.go:117] "RemoveContainer" containerID="27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.085573 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": container with ID starting with 
27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06 not found: ID does not exist" containerID="27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.085599 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06"} err="failed to get container status \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": rpc error: code = NotFound desc = could not find container \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": container with ID starting with 27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.085616 4995 scope.go:117] "RemoveContainer" containerID="c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.085892 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": container with ID starting with c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0 not found: ID does not exist" containerID="c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.085919 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0"} err="failed to get container status \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": rpc error: code = NotFound desc = could not find container \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": container with ID starting with c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0 not found: ID does not 
exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.085935 4995 scope.go:117] "RemoveContainer" containerID="e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.087332 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": container with ID starting with e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0 not found: ID does not exist" containerID="e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.087368 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0"} err="failed to get container status \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": rpc error: code = NotFound desc = could not find container \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": container with ID starting with e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.087410 4995 scope.go:117] "RemoveContainer" containerID="5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.087704 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a"} err="failed to get container status \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": rpc error: code = NotFound desc = could not find container \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": container with ID starting with 5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a not found: ID 
does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.087752 4995 scope.go:117] "RemoveContainer" containerID="27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.087987 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06"} err="failed to get container status \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": rpc error: code = NotFound desc = could not find container \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": container with ID starting with 27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088010 4995 scope.go:117] "RemoveContainer" containerID="c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088240 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0"} err="failed to get container status \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": rpc error: code = NotFound desc = could not find container \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": container with ID starting with c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088265 4995 scope.go:117] "RemoveContainer" containerID="e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088490 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0"} err="failed to get container 
status \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": rpc error: code = NotFound desc = could not find container \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": container with ID starting with e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088514 4995 scope.go:117] "RemoveContainer" containerID="5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088718 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a"} err="failed to get container status \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": rpc error: code = NotFound desc = could not find container \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": container with ID starting with 5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088739 4995 scope.go:117] "RemoveContainer" containerID="27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088935 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06"} err="failed to get container status \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": rpc error: code = NotFound desc = could not find container \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": container with ID starting with 27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088956 4995 scope.go:117] "RemoveContainer" 
containerID="c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089210 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0"} err="failed to get container status \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": rpc error: code = NotFound desc = could not find container \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": container with ID starting with c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089233 4995 scope.go:117] "RemoveContainer" containerID="e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089432 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0"} err="failed to get container status \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": rpc error: code = NotFound desc = could not find container \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": container with ID starting with e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089455 4995 scope.go:117] "RemoveContainer" containerID="5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089664 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a"} err="failed to get container status \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": rpc error: code = NotFound desc = could 
not find container \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": container with ID starting with 5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089686 4995 scope.go:117] "RemoveContainer" containerID="27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089911 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06"} err="failed to get container status \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": rpc error: code = NotFound desc = could not find container \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": container with ID starting with 27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089933 4995 scope.go:117] "RemoveContainer" containerID="c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.090174 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0"} err="failed to get container status \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": rpc error: code = NotFound desc = could not find container \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": container with ID starting with c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.090197 4995 scope.go:117] "RemoveContainer" containerID="e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 
23:31:21.090420 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0"} err="failed to get container status \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": rpc error: code = NotFound desc = could not find container \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": container with ID starting with e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.096821 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.115268 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.131592 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-config-data" (OuterVolumeSpecName: "config-data") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142666 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142698 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142707 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142719 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142727 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142736 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142767 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142794 4995 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-9wz2l\" (UniqueName: \"kubernetes.io/projected/e2a868ee-449d-451a-8f70-ec5800231c45-kube-api-access-9wz2l\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.333433 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.341985 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.361687 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.362079 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="sg-core" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362157 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="sg-core" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.362180 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="proxy-httpd" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362188 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="proxy-httpd" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.362203 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284fb412-d705-4c0a-b11d-74f9074a9b6c" containerName="keystone-api" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362212 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="284fb412-d705-4c0a-b11d-74f9074a9b6c" containerName="keystone-api" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.362230 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" 
containerName="ceilometer-notification-agent" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362239 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="ceilometer-notification-agent" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.362270 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="ceilometer-central-agent" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362278 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="ceilometer-central-agent" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362491 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="proxy-httpd" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362512 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="ceilometer-notification-agent" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362528 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="284fb412-d705-4c0a-b11d-74f9074a9b6c" containerName="keystone-api" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362540 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="sg-core" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362554 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="ceilometer-central-agent" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.364339 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.366095 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.366408 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.368720 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.371368 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446732 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446776 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446804 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-scripts\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446828 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km5mx\" (UniqueName: \"kubernetes.io/projected/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-kube-api-access-km5mx\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446843 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-run-httpd\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446870 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446917 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-log-httpd\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446942 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-config-data\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.547860 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.547916 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.547951 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-scripts\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.547988 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km5mx\" (UniqueName: \"kubernetes.io/projected/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-kube-api-access-km5mx\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.548011 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-run-httpd\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.548047 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.548094 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-log-httpd\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.548145 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-config-data\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.548585 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-run-httpd\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.548839 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-log-httpd\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.552317 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-scripts\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.552578 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.552757 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.553085 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.559081 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-config-data\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.569570 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km5mx\" (UniqueName: \"kubernetes.io/projected/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-kube-api-access-km5mx\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.682398 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:31:22 crc kubenswrapper[4995]: I0126 23:31:22.172293 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:31:22 crc kubenswrapper[4995]: I0126 23:31:22.527394 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" path="/var/lib/kubelet/pods/e2a868ee-449d-451a-8f70-ec5800231c45/volumes"
Jan 26 23:31:23 crc kubenswrapper[4995]: I0126 23:31:23.024996 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerStarted","Data":"cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296"}
Jan 26 23:31:23 crc kubenswrapper[4995]: I0126 23:31:23.025380 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerStarted","Data":"f905ef058c30c4fbd868dd5ef0e469d865481d80ace2b60c0d346ab24f53efa4"}
Jan 26 23:31:24 crc kubenswrapper[4995]: I0126 23:31:24.037138 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerStarted","Data":"fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246"}
Jan 26 23:31:25 crc kubenswrapper[4995]: I0126 23:31:25.049289 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerStarted","Data":"3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624"}
Jan 26 23:31:26 crc kubenswrapper[4995]: I0126 23:31:26.063691 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerStarted","Data":"f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249"}
Jan 26 23:31:26 crc kubenswrapper[4995]: I0126 23:31:26.064066 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:31:26 crc kubenswrapper[4995]: I0126 23:31:26.090471 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.936746457 podStartE2EDuration="5.090456716s" podCreationTimestamp="2026-01-26 23:31:21 +0000 UTC" firstStartedPulling="2026-01-26 23:31:22.178671283 +0000 UTC m=+1386.343378748" lastFinishedPulling="2026-01-26 23:31:25.332381542 +0000 UTC m=+1389.497089007" observedRunningTime="2026-01-26 23:31:26.088542628 +0000 UTC m=+1390.253250093" watchObservedRunningTime="2026-01-26 23:31:26.090456716 +0000 UTC m=+1390.255164181"
Jan 26 23:31:29 crc kubenswrapper[4995]: I0126 23:31:29.564612 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2nqkb"
Jan 26 23:31:29 crc kubenswrapper[4995]: I0126 23:31:29.625841 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2nqkb"
Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.139773 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2nqkb"]
Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.140549 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2nqkb" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="registry-server" containerID="cri-o://d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16" gracePeriod=2
Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.734803 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2nqkb"
Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.787364 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l728r\" (UniqueName: \"kubernetes.io/projected/fabd6826-906b-4dfc-af45-6d64bacdd794-kube-api-access-l728r\") pod \"fabd6826-906b-4dfc-af45-6d64bacdd794\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") "
Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.787497 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-catalog-content\") pod \"fabd6826-906b-4dfc-af45-6d64bacdd794\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") "
Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.791384 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-utilities\") pod \"fabd6826-906b-4dfc-af45-6d64bacdd794\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") "
Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.792094 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-utilities" (OuterVolumeSpecName: "utilities") pod "fabd6826-906b-4dfc-af45-6d64bacdd794" (UID: "fabd6826-906b-4dfc-af45-6d64bacdd794"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.793551 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fabd6826-906b-4dfc-af45-6d64bacdd794-kube-api-access-l728r" (OuterVolumeSpecName: "kube-api-access-l728r") pod "fabd6826-906b-4dfc-af45-6d64bacdd794" (UID: "fabd6826-906b-4dfc-af45-6d64bacdd794"). InnerVolumeSpecName "kube-api-access-l728r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.893532 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l728r\" (UniqueName: \"kubernetes.io/projected/fabd6826-906b-4dfc-af45-6d64bacdd794-kube-api-access-l728r\") on node \"crc\" DevicePath \"\""
Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.893568 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.933804 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fabd6826-906b-4dfc-af45-6d64bacdd794" (UID: "fabd6826-906b-4dfc-af45-6d64bacdd794"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.994783 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.143395 4995 generic.go:334] "Generic (PLEG): container finished" podID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerID="d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16" exitCode=0
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.143474 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2nqkb"
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.143451 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nqkb" event={"ID":"fabd6826-906b-4dfc-af45-6d64bacdd794","Type":"ContainerDied","Data":"d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16"}
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.143646 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nqkb" event={"ID":"fabd6826-906b-4dfc-af45-6d64bacdd794","Type":"ContainerDied","Data":"db9b7d03b326053c44e82afb8fe738958f31d4b689aa3b9b6e0d3e7411632e71"}
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.143688 4995 scope.go:117] "RemoveContainer" containerID="d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16"
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.182419 4995 scope.go:117] "RemoveContainer" containerID="061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae"
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.186196 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2nqkb"]
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.193402 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2nqkb"]
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.217284 4995 scope.go:117] "RemoveContainer" containerID="227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce"
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.236534 4995 scope.go:117] "RemoveContainer" containerID="d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16"
Jan 26 23:31:34 crc kubenswrapper[4995]: E0126 23:31:34.237218 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16\": container with ID starting with d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16 not found: ID does not exist" containerID="d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16"
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.237246 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16"} err="failed to get container status \"d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16\": rpc error: code = NotFound desc = could not find container \"d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16\": container with ID starting with d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16 not found: ID does not exist"
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.237266 4995 scope.go:117] "RemoveContainer" containerID="061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae"
Jan 26 23:31:34 crc kubenswrapper[4995]: E0126 23:31:34.237579 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae\": container with ID starting with 061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae not found: ID does not exist" containerID="061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae"
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.237634 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae"} err="failed to get container status \"061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae\": rpc error: code = NotFound desc = could not find container \"061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae\": container with ID starting with 061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae not found: ID does not exist"
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.237667 4995 scope.go:117] "RemoveContainer" containerID="227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce"
Jan 26 23:31:34 crc kubenswrapper[4995]: E0126 23:31:34.238092 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce\": container with ID starting with 227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce not found: ID does not exist" containerID="227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce"
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.238128 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce"} err="failed to get container status \"227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce\": rpc error: code = NotFound desc = could not find container \"227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce\": container with ID starting with 227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce not found: ID does not exist"
Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.532859 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" path="/var/lib/kubelet/pods/fabd6826-906b-4dfc-af45-6d64bacdd794/volumes"
Jan 26 23:31:40 crc kubenswrapper[4995]: I0126 23:31:40.893989 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 23:31:40 crc kubenswrapper[4995]: I0126 23:31:40.894744 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 23:31:51 crc kubenswrapper[4995]: I0126 23:31:51.691696 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.844594 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh"]
Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.852936 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh"]
Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.902233 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher26de-account-delete-9k8t5"]
Jan 26 23:31:53 crc kubenswrapper[4995]: E0126 23:31:53.903072 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="extract-utilities"
Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.903201 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="extract-utilities"
Jan 26 23:31:53 crc kubenswrapper[4995]: E0126 23:31:53.903318 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="extract-content"
Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.903395 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="extract-content"
Jan 26 23:31:53 crc kubenswrapper[4995]: E0126 23:31:53.903482 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="registry-server"
Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.903561 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="registry-server"
Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.903850 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="registry-server"
Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.904612 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5"
Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.962162 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher26de-account-delete-9k8t5"]
Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.969888 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.970163 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-kuttl-api-log" containerID="cri-o://506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1" gracePeriod=30
Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.970537 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-api" containerID="cri-o://99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6" gracePeriod=30
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.046290 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75z2t\" (UniqueName: \"kubernetes.io/projected/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-kube-api-access-75z2t\") pod \"watcher26de-account-delete-9k8t5\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5"
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.046405 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-operator-scripts\") pod \"watcher26de-account-delete-9k8t5\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5"
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.062174 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.062414 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="118c105c-80f5-4d0f-94c2-17f3269025ca" containerName="watcher-decision-engine" containerID="cri-o://6afd5efd4dcf18121a5fd9c8de3507a46a5319c8e70b9ca7bc1a4ac45736a922" gracePeriod=30
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.082426 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.082691 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="77a1e608-88ba-44dc-a4fd-86bd6bd980c1" containerName="watcher-applier" containerID="cri-o://a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6" gracePeriod=30
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.147866 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-operator-scripts\") pod \"watcher26de-account-delete-9k8t5\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5"
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.148003 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75z2t\" (UniqueName: \"kubernetes.io/projected/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-kube-api-access-75z2t\") pod \"watcher26de-account-delete-9k8t5\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5"
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.149009 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-operator-scripts\") pod \"watcher26de-account-delete-9k8t5\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5"
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.167655 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75z2t\" (UniqueName: \"kubernetes.io/projected/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-kube-api-access-75z2t\") pod \"watcher26de-account-delete-9k8t5\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5"
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.222137 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5"
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.371666 4995 generic.go:334] "Generic (PLEG): container finished" podID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerID="506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1" exitCode=143
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.371721 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dfd66ee0-752c-4d44-92e1-a287384642e2","Type":"ContainerDied","Data":"506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1"}
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.543675 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74804f16-0037-44f0-a6a5-71414a33cee2" path="/var/lib/kubelet/pods/74804f16-0037-44f0-a6a5-71414a33cee2/volumes"
Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.736816 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher26de-account-delete-9k8t5"]
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.299670 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.372708 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkr78\" (UniqueName: \"kubernetes.io/projected/dfd66ee0-752c-4d44-92e1-a287384642e2-kube-api-access-xkr78\") pod \"dfd66ee0-752c-4d44-92e1-a287384642e2\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") "
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.372766 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-config-data\") pod \"dfd66ee0-752c-4d44-92e1-a287384642e2\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") "
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.372870 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-cert-memcached-mtls\") pod \"dfd66ee0-752c-4d44-92e1-a287384642e2\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") "
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.372897 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-custom-prometheus-ca\") pod \"dfd66ee0-752c-4d44-92e1-a287384642e2\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") "
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.372926 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-combined-ca-bundle\") pod \"dfd66ee0-752c-4d44-92e1-a287384642e2\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") "
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.372961 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd66ee0-752c-4d44-92e1-a287384642e2-logs\") pod \"dfd66ee0-752c-4d44-92e1-a287384642e2\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") "
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.373628 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd66ee0-752c-4d44-92e1-a287384642e2-logs" (OuterVolumeSpecName: "logs") pod "dfd66ee0-752c-4d44-92e1-a287384642e2" (UID: "dfd66ee0-752c-4d44-92e1-a287384642e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.390702 4995 generic.go:334] "Generic (PLEG): container finished" podID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerID="99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6" exitCode=0
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.390828 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dfd66ee0-752c-4d44-92e1-a287384642e2","Type":"ContainerDied","Data":"99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6"}
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.390864 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dfd66ee0-752c-4d44-92e1-a287384642e2","Type":"ContainerDied","Data":"9a7d0124cd7a5360719a3b66cdef998880be62eb0874154eac5904934bc66e9c"}
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.390886 4995 scope.go:117] "RemoveContainer" containerID="99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6"
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.391092 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.401249 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfd66ee0-752c-4d44-92e1-a287384642e2" (UID: "dfd66ee0-752c-4d44-92e1-a287384642e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.407524 4995 generic.go:334] "Generic (PLEG): container finished" podID="a31e3b1c-6d46-44f3-9dee-2e8652ca0807" containerID="92cc26c82a9b23a9721c60030809c14c060714d70c702de958b8d81f8d16479b" exitCode=0
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.407647 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" event={"ID":"a31e3b1c-6d46-44f3-9dee-2e8652ca0807","Type":"ContainerDied","Data":"92cc26c82a9b23a9721c60030809c14c060714d70c702de958b8d81f8d16479b"}
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.407693 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" event={"ID":"a31e3b1c-6d46-44f3-9dee-2e8652ca0807","Type":"ContainerStarted","Data":"027bfffc02dc6c70e1854db7bc0d78b996ee8cb3498ef38d0dac117a85c79839"}
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.411251 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd66ee0-752c-4d44-92e1-a287384642e2-kube-api-access-xkr78" (OuterVolumeSpecName: "kube-api-access-xkr78") pod "dfd66ee0-752c-4d44-92e1-a287384642e2" (UID: "dfd66ee0-752c-4d44-92e1-a287384642e2"). InnerVolumeSpecName "kube-api-access-xkr78". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.411475 4995 generic.go:334] "Generic (PLEG): container finished" podID="118c105c-80f5-4d0f-94c2-17f3269025ca" containerID="6afd5efd4dcf18121a5fd9c8de3507a46a5319c8e70b9ca7bc1a4ac45736a922" exitCode=0
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.411607 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"118c105c-80f5-4d0f-94c2-17f3269025ca","Type":"ContainerDied","Data":"6afd5efd4dcf18121a5fd9c8de3507a46a5319c8e70b9ca7bc1a4ac45736a922"}
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.445187 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "dfd66ee0-752c-4d44-92e1-a287384642e2" (UID: "dfd66ee0-752c-4d44-92e1-a287384642e2"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.449926 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-config-data" (OuterVolumeSpecName: "config-data") pod "dfd66ee0-752c-4d44-92e1-a287384642e2" (UID: "dfd66ee0-752c-4d44-92e1-a287384642e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.475377 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.475501 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.475558 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd66ee0-752c-4d44-92e1-a287384642e2-logs\") on node \"crc\" DevicePath \"\""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.475617 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkr78\" (UniqueName: \"kubernetes.io/projected/dfd66ee0-752c-4d44-92e1-a287384642e2-kube-api-access-xkr78\") on node \"crc\" DevicePath \"\""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.475673 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.481850 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "dfd66ee0-752c-4d44-92e1-a287384642e2" (UID: "dfd66ee0-752c-4d44-92e1-a287384642e2"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.495974 4995 scope.go:117] "RemoveContainer" containerID="506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1"
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.525868 4995 scope.go:117] "RemoveContainer" containerID="99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6"
Jan 26 23:31:55 crc kubenswrapper[4995]: E0126 23:31:55.526292 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6\": container with ID starting with 99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6 not found: ID does not exist" containerID="99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6"
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.526387 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6"} err="failed to get container status \"99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6\": rpc error: code = NotFound desc = could not find container \"99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6\": container with ID starting with 99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6 not found: ID does not exist"
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.526466 4995 scope.go:117] "RemoveContainer" containerID="506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1"
Jan 26 23:31:55 crc kubenswrapper[4995]: E0126 23:31:55.527049 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1\": container with ID starting with 506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1 not found: ID does not exist" containerID="506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1"
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.527165 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1"} err="failed to get container status \"506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1\": rpc error: code = NotFound desc = could not find container \"506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1\": container with ID starting with 506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1 not found: ID does not exist"
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.577210 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.591478 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.678286 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/118c105c-80f5-4d0f-94c2-17f3269025ca-logs\") pod \"118c105c-80f5-4d0f-94c2-17f3269025ca\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") "
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.678434 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-combined-ca-bundle\") pod \"118c105c-80f5-4d0f-94c2-17f3269025ca\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") "
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.678504 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-cert-memcached-mtls\") pod \"118c105c-80f5-4d0f-94c2-17f3269025ca\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") "
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.678579 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-custom-prometheus-ca\") pod \"118c105c-80f5-4d0f-94c2-17f3269025ca\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") "
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.678622 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7djc9\" (UniqueName: \"kubernetes.io/projected/118c105c-80f5-4d0f-94c2-17f3269025ca-kube-api-access-7djc9\") pod \"118c105c-80f5-4d0f-94c2-17f3269025ca\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") "
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.678642 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/118c105c-80f5-4d0f-94c2-17f3269025ca-logs" (OuterVolumeSpecName: "logs") pod "118c105c-80f5-4d0f-94c2-17f3269025ca" (UID: "118c105c-80f5-4d0f-94c2-17f3269025ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.678676 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-config-data\") pod \"118c105c-80f5-4d0f-94c2-17f3269025ca\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") "
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.679081 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/118c105c-80f5-4d0f-94c2-17f3269025ca-logs\") on node \"crc\" DevicePath \"\""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.683533 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/118c105c-80f5-4d0f-94c2-17f3269025ca-kube-api-access-7djc9" (OuterVolumeSpecName: "kube-api-access-7djc9") pod "118c105c-80f5-4d0f-94c2-17f3269025ca" (UID: "118c105c-80f5-4d0f-94c2-17f3269025ca"). InnerVolumeSpecName "kube-api-access-7djc9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.705209 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "118c105c-80f5-4d0f-94c2-17f3269025ca" (UID: "118c105c-80f5-4d0f-94c2-17f3269025ca"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.711931 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "118c105c-80f5-4d0f-94c2-17f3269025ca" (UID: "118c105c-80f5-4d0f-94c2-17f3269025ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.733305 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-config-data" (OuterVolumeSpecName: "config-data") pod "118c105c-80f5-4d0f-94c2-17f3269025ca" (UID: "118c105c-80f5-4d0f-94c2-17f3269025ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.734147 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.741326 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "118c105c-80f5-4d0f-94c2-17f3269025ca" (UID: "118c105c-80f5-4d0f-94c2-17f3269025ca"). InnerVolumeSpecName "cert-memcached-mtls".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.743535 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.781080 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.781140 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.781152 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.781164 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7djc9\" (UniqueName: \"kubernetes.io/projected/118c105c-80f5-4d0f-94c2-17f3269025ca-kube-api-access-7djc9\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.781176 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.430010 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.430008 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"118c105c-80f5-4d0f-94c2-17f3269025ca","Type":"ContainerDied","Data":"6780d1fd068d258f993980132b0b6bd2df34b9245721daf2a80227aeaf1d0ca4"} Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.430144 4995 scope.go:117] "RemoveContainer" containerID="6afd5efd4dcf18121a5fd9c8de3507a46a5319c8e70b9ca7bc1a4ac45736a922" Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.491667 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.508870 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.542412 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="118c105c-80f5-4d0f-94c2-17f3269025ca" path="/var/lib/kubelet/pods/118c105c-80f5-4d0f-94c2-17f3269025ca/volumes" Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.543427 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" path="/var/lib/kubelet/pods/dfd66ee0-752c-4d44-92e1-a287384642e2/volumes" Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.857660 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.903830 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-operator-scripts\") pod \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.903897 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75z2t\" (UniqueName: \"kubernetes.io/projected/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-kube-api-access-75z2t\") pod \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.904809 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a31e3b1c-6d46-44f3-9dee-2e8652ca0807" (UID: "a31e3b1c-6d46-44f3-9dee-2e8652ca0807"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.908662 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-kube-api-access-75z2t" (OuterVolumeSpecName: "kube-api-access-75z2t") pod "a31e3b1c-6d46-44f3-9dee-2e8652ca0807" (UID: "a31e3b1c-6d46-44f3-9dee-2e8652ca0807"). InnerVolumeSpecName "kube-api-access-75z2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.006259 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.006313 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75z2t\" (UniqueName: \"kubernetes.io/projected/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-kube-api-access-75z2t\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.450676 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.454755 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" event={"ID":"a31e3b1c-6d46-44f3-9dee-2e8652ca0807","Type":"ContainerDied","Data":"027bfffc02dc6c70e1854db7bc0d78b996ee8cb3498ef38d0dac117a85c79839"} Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.454799 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="027bfffc02dc6c70e1854db7bc0d78b996ee8cb3498ef38d0dac117a85c79839" Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.454858 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.458301 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-central-agent" containerID="cri-o://cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296" gracePeriod=30 Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.458326 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="sg-core" containerID="cri-o://3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624" gracePeriod=30 Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.458460 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="proxy-httpd" containerID="cri-o://f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249" gracePeriod=30 Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.458454 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-notification-agent" containerID="cri-o://fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246" gracePeriod=30 Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.199201 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.332725 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmhpj\" (UniqueName: \"kubernetes.io/projected/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-kube-api-access-cmhpj\") pod \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.333070 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-config-data\") pod \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.333116 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-cert-memcached-mtls\") pod \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.333171 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-combined-ca-bundle\") pod \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.333238 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-logs\") pod \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.333749 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-logs" (OuterVolumeSpecName: "logs") pod "77a1e608-88ba-44dc-a4fd-86bd6bd980c1" (UID: "77a1e608-88ba-44dc-a4fd-86bd6bd980c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.338311 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-kube-api-access-cmhpj" (OuterVolumeSpecName: "kube-api-access-cmhpj") pod "77a1e608-88ba-44dc-a4fd-86bd6bd980c1" (UID: "77a1e608-88ba-44dc-a4fd-86bd6bd980c1"). InnerVolumeSpecName "kube-api-access-cmhpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.355729 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77a1e608-88ba-44dc-a4fd-86bd6bd980c1" (UID: "77a1e608-88ba-44dc-a4fd-86bd6bd980c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.371342 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-config-data" (OuterVolumeSpecName: "config-data") pod "77a1e608-88ba-44dc-a4fd-86bd6bd980c1" (UID: "77a1e608-88ba-44dc-a4fd-86bd6bd980c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.403009 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "77a1e608-88ba-44dc-a4fd-86bd6bd980c1" (UID: "77a1e608-88ba-44dc-a4fd-86bd6bd980c1"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.434675 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.434710 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.434722 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.434732 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.434741 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmhpj\" (UniqueName: \"kubernetes.io/projected/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-kube-api-access-cmhpj\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.462549 4995 generic.go:334] "Generic (PLEG): container finished" podID="77a1e608-88ba-44dc-a4fd-86bd6bd980c1" containerID="a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6" exitCode=0 Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.462608 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.462619 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"77a1e608-88ba-44dc-a4fd-86bd6bd980c1","Type":"ContainerDied","Data":"a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6"} Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.462647 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"77a1e608-88ba-44dc-a4fd-86bd6bd980c1","Type":"ContainerDied","Data":"c39928c36c8af1a9535983a878c5e72ae844418dbec585db7b98acb4c5ad7317"} Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.462666 4995 scope.go:117] "RemoveContainer" containerID="a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.465682 4995 generic.go:334] "Generic (PLEG): container finished" podID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerID="f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249" exitCode=0 Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.465706 4995 generic.go:334] "Generic (PLEG): container finished" podID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerID="3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624" exitCode=2 Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.465717 4995 generic.go:334] "Generic (PLEG): container finished" podID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerID="cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296" exitCode=0 Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.465736 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerDied","Data":"f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249"} Jan 26 23:31:58 crc 
kubenswrapper[4995]: I0126 23:31:58.465769 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerDied","Data":"3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624"} Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.465778 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerDied","Data":"cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296"} Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.493789 4995 scope.go:117] "RemoveContainer" containerID="a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.493938 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:31:58 crc kubenswrapper[4995]: E0126 23:31:58.494788 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6\": container with ID starting with a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6 not found: ID does not exist" containerID="a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.494865 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6"} err="failed to get container status \"a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6\": rpc error: code = NotFound desc = could not find container \"a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6\": container with ID starting with a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6 not found: ID does not 
exist" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.502548 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.526487 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77a1e608-88ba-44dc-a4fd-86bd6bd980c1" path="/var/lib/kubelet/pods/77a1e608-88ba-44dc-a4fd-86bd6bd980c1/volumes" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.949776 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8r7vh"] Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.964694 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8r7vh"] Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.977598 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-26de-account-create-update-h8699"] Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.991212 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher26de-account-delete-9k8t5"] Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.999604 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher26de-account-delete-9k8t5"] Jan 26 23:31:59 crc kubenswrapper[4995]: I0126 23:31:59.005759 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-26de-account-create-update-h8699"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.006083 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092270 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km5mx\" (UniqueName: \"kubernetes.io/projected/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-kube-api-access-km5mx\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092420 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-scripts\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092465 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-sg-core-conf-yaml\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092697 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-log-httpd\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092763 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-config-data\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092792 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-run-httpd\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092821 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-combined-ca-bundle\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092850 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-ceilometer-tls-certs\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.093389 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.093590 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.103399 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-scripts" (OuterVolumeSpecName: "scripts") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.114870 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-kube-api-access-km5mx" (OuterVolumeSpecName: "kube-api-access-km5mx") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "kube-api-access-km5mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.168551 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.189876 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.191451 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.194242 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.194274 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.194285 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.194298 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.194310 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km5mx\" (UniqueName: \"kubernetes.io/projected/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-kube-api-access-km5mx\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.194319 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.194328 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.217480 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-config-data" (OuterVolumeSpecName: "config-data") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.296276 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369196 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-8jh4s"] Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369585 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-notification-agent" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369608 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-notification-agent" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369623 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="proxy-httpd" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369632 4995 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="proxy-httpd" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369647 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118c105c-80f5-4d0f-94c2-17f3269025ca" containerName="watcher-decision-engine" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369656 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="118c105c-80f5-4d0f-94c2-17f3269025ca" containerName="watcher-decision-engine" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369673 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77a1e608-88ba-44dc-a4fd-86bd6bd980c1" containerName="watcher-applier" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369680 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="77a1e608-88ba-44dc-a4fd-86bd6bd980c1" containerName="watcher-applier" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369709 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-api" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369717 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-api" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369734 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-kuttl-api-log" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369742 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-kuttl-api-log" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369758 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="sg-core" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369765 4995 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="sg-core" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369778 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31e3b1c-6d46-44f3-9dee-2e8652ca0807" containerName="mariadb-account-delete" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369786 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31e3b1c-6d46-44f3-9dee-2e8652ca0807" containerName="mariadb-account-delete" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369797 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-central-agent" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369805 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-central-agent" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369975 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="118c105c-80f5-4d0f-94c2-17f3269025ca" containerName="watcher-decision-engine" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369994 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-central-agent" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.370008 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-kuttl-api-log" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.370019 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="77a1e608-88ba-44dc-a4fd-86bd6bd980c1" containerName="watcher-applier" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.370031 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a31e3b1c-6d46-44f3-9dee-2e8652ca0807" containerName="mariadb-account-delete" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 
23:32:00.370042 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-notification-agent" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.370049 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="sg-core" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.370061 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-api" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.370072 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="proxy-httpd" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.370801 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.381649 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8jh4s"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.473590 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-fed2-account-create-update-xlm64"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.474738 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.477841 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.490067 4995 generic.go:334] "Generic (PLEG): container finished" podID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerID="fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246" exitCode=0 Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.490150 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.490155 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerDied","Data":"fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246"} Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.490234 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerDied","Data":"f905ef058c30c4fbd868dd5ef0e469d865481d80ace2b60c0d346ab24f53efa4"} Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.490266 4995 scope.go:117] "RemoveContainer" containerID="f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.499355 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c0fe20-e8a0-4e46-889c-f7484847605c-operator-scripts\") pod \"watcher-db-create-8jh4s\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.499436 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwhjs\" (UniqueName: \"kubernetes.io/projected/e5c0fe20-e8a0-4e46-889c-f7484847605c-kube-api-access-hwhjs\") pod \"watcher-db-create-8jh4s\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.505286 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-fed2-account-create-update-xlm64"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.523069 4995 scope.go:117] "RemoveContainer" containerID="3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.539951 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="360b1483-8046-4c4c-920d-69387e2fbbed" path="/var/lib/kubelet/pods/360b1483-8046-4c4c-920d-69387e2fbbed/volumes" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.540753 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31e3b1c-6d46-44f3-9dee-2e8652ca0807" path="/var/lib/kubelet/pods/a31e3b1c-6d46-44f3-9dee-2e8652ca0807/volumes" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.541842 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab224b66-6f5e-4e78-bdc4-e913dcb2250a" path="/var/lib/kubelet/pods/ab224b66-6f5e-4e78-bdc4-e913dcb2250a/volumes" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.542537 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.560448 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.566936 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 
23:32:00.567754 4995 scope.go:117] "RemoveContainer" containerID="fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.572557 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.573721 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.575559 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.576425 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.576302 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.603831 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c0fe20-e8a0-4e46-889c-f7484847605c-operator-scripts\") pod \"watcher-db-create-8jh4s\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.603910 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwhjs\" (UniqueName: \"kubernetes.io/projected/e5c0fe20-e8a0-4e46-889c-f7484847605c-kube-api-access-hwhjs\") pod \"watcher-db-create-8jh4s\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.603957 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/db49764a-224f-47ef-ad9b-016ac609fc81-operator-scripts\") pod \"watcher-fed2-account-create-update-xlm64\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.604146 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqhxl\" (UniqueName: \"kubernetes.io/projected/db49764a-224f-47ef-ad9b-016ac609fc81-kube-api-access-qqhxl\") pod \"watcher-fed2-account-create-update-xlm64\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.604901 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c0fe20-e8a0-4e46-889c-f7484847605c-operator-scripts\") pod \"watcher-db-create-8jh4s\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.616436 4995 scope.go:117] "RemoveContainer" containerID="cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.622912 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwhjs\" (UniqueName: \"kubernetes.io/projected/e5c0fe20-e8a0-4e46-889c-f7484847605c-kube-api-access-hwhjs\") pod \"watcher-db-create-8jh4s\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.644387 4995 scope.go:117] "RemoveContainer" containerID="f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.645206 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249\": container with ID starting with f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249 not found: ID does not exist" containerID="f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.645236 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249"} err="failed to get container status \"f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249\": rpc error: code = NotFound desc = could not find container \"f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249\": container with ID starting with f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249 not found: ID does not exist" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.645257 4995 scope.go:117] "RemoveContainer" containerID="3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.646394 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624\": container with ID starting with 3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624 not found: ID does not exist" containerID="3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.646462 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624"} err="failed to get container status \"3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624\": rpc error: code = NotFound desc = could not find container 
\"3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624\": container with ID starting with 3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624 not found: ID does not exist" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.646521 4995 scope.go:117] "RemoveContainer" containerID="fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.647144 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246\": container with ID starting with fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246 not found: ID does not exist" containerID="fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.647166 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246"} err="failed to get container status \"fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246\": rpc error: code = NotFound desc = could not find container \"fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246\": container with ID starting with fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246 not found: ID does not exist" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.647183 4995 scope.go:117] "RemoveContainer" containerID="cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.647540 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296\": container with ID starting with cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296 not found: ID does not exist" 
containerID="cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.647565 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296"} err="failed to get container status \"cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296\": rpc error: code = NotFound desc = could not find container \"cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296\": container with ID starting with cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296 not found: ID does not exist" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.688163 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705249 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705337 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705370 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705407 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdltb\" (UniqueName: \"kubernetes.io/projected/a19dab45-658a-43e5-93f9-7405f4e265b8-kube-api-access-rdltb\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705540 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-config-data\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705614 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-log-httpd\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705681 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqhxl\" (UniqueName: \"kubernetes.io/projected/db49764a-224f-47ef-ad9b-016ac609fc81-kube-api-access-qqhxl\") pod \"watcher-fed2-account-create-update-xlm64\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705839 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-scripts\") pod \"ceilometer-0\" (UID: 
\"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705992 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-run-httpd\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.706160 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db49764a-224f-47ef-ad9b-016ac609fc81-operator-scripts\") pod \"watcher-fed2-account-create-update-xlm64\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.707011 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db49764a-224f-47ef-ad9b-016ac609fc81-operator-scripts\") pod \"watcher-fed2-account-create-update-xlm64\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.723937 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqhxl\" (UniqueName: \"kubernetes.io/projected/db49764a-224f-47ef-ad9b-016ac609fc81-kube-api-access-qqhxl\") pod \"watcher-fed2-account-create-update-xlm64\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.791919 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.809911 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdltb\" (UniqueName: \"kubernetes.io/projected/a19dab45-658a-43e5-93f9-7405f4e265b8-kube-api-access-rdltb\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.810240 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-config-data\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.810274 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-log-httpd\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.810311 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-scripts\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.810339 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-run-httpd\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.810394 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.810416 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.810434 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.819209 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.821618 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-run-httpd\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.822134 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-log-httpd\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.826734 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.827078 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-config-data\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.833738 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.836240 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-scripts\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.850193 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdltb\" (UniqueName: \"kubernetes.io/projected/a19dab45-658a-43e5-93f9-7405f4e265b8-kube-api-access-rdltb\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 
26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.910866 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:01 crc kubenswrapper[4995]: I0126 23:32:01.367328 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8jh4s"] Jan 26 23:32:01 crc kubenswrapper[4995]: I0126 23:32:01.512803 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-8jh4s" event={"ID":"e5c0fe20-e8a0-4e46-889c-f7484847605c","Type":"ContainerStarted","Data":"3e32960f3dc06b2e789bd686d4b6b46e8c8920f01793f07e702510a396be236a"} Jan 26 23:32:01 crc kubenswrapper[4995]: I0126 23:32:01.559462 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-fed2-account-create-update-xlm64"] Jan 26 23:32:01 crc kubenswrapper[4995]: W0126 23:32:01.561522 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb49764a_224f_47ef_ad9b_016ac609fc81.slice/crio-6da1901ade9731de525718ca7906515f7aba2caceef37214aa1c0fbe0c2a4286 WatchSource:0}: Error finding container 6da1901ade9731de525718ca7906515f7aba2caceef37214aa1c0fbe0c2a4286: Status 404 returned error can't find the container with id 6da1901ade9731de525718ca7906515f7aba2caceef37214aa1c0fbe0c2a4286 Jan 26 23:32:01 crc kubenswrapper[4995]: I0126 23:32:01.718743 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:01 crc kubenswrapper[4995]: W0126 23:32:01.719719 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda19dab45_658a_43e5_93f9_7405f4e265b8.slice/crio-058946e2dc5a1942a8072bf11dde6ef901ddf82c71950b72a1b542ae3f72abdc WatchSource:0}: Error finding container 058946e2dc5a1942a8072bf11dde6ef901ddf82c71950b72a1b542ae3f72abdc: Status 404 returned 
error can't find the container with id 058946e2dc5a1942a8072bf11dde6ef901ddf82c71950b72a1b542ae3f72abdc Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.523789 4995 generic.go:334] "Generic (PLEG): container finished" podID="db49764a-224f-47ef-ad9b-016ac609fc81" containerID="42bdbf79e7939fb4f6bd922600909eb049e24579c79123df69d4d9b5938f3988" exitCode=0 Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.527499 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" path="/var/lib/kubelet/pods/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe/volumes" Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.528349 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" event={"ID":"db49764a-224f-47ef-ad9b-016ac609fc81","Type":"ContainerDied","Data":"42bdbf79e7939fb4f6bd922600909eb049e24579c79123df69d4d9b5938f3988"} Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.528385 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" event={"ID":"db49764a-224f-47ef-ad9b-016ac609fc81","Type":"ContainerStarted","Data":"6da1901ade9731de525718ca7906515f7aba2caceef37214aa1c0fbe0c2a4286"} Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.528401 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerStarted","Data":"16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb"} Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.528415 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerStarted","Data":"058946e2dc5a1942a8072bf11dde6ef901ddf82c71950b72a1b542ae3f72abdc"} Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.528481 4995 generic.go:334] "Generic 
(PLEG): container finished" podID="e5c0fe20-e8a0-4e46-889c-f7484847605c" containerID="9c92253ce611dea0df9e21427e5984e7db9bccf73045bb24769fa3dbad187a39" exitCode=0 Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.528521 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-8jh4s" event={"ID":"e5c0fe20-e8a0-4e46-889c-f7484847605c","Type":"ContainerDied","Data":"9c92253ce611dea0df9e21427e5984e7db9bccf73045bb24769fa3dbad187a39"} Jan 26 23:32:03 crc kubenswrapper[4995]: I0126 23:32:03.538154 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerStarted","Data":"380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180"} Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.150385 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.158916 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.293726 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c0fe20-e8a0-4e46-889c-f7484847605c-operator-scripts\") pod \"e5c0fe20-e8a0-4e46-889c-f7484847605c\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.293833 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqhxl\" (UniqueName: \"kubernetes.io/projected/db49764a-224f-47ef-ad9b-016ac609fc81-kube-api-access-qqhxl\") pod \"db49764a-224f-47ef-ad9b-016ac609fc81\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.293866 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwhjs\" (UniqueName: \"kubernetes.io/projected/e5c0fe20-e8a0-4e46-889c-f7484847605c-kube-api-access-hwhjs\") pod \"e5c0fe20-e8a0-4e46-889c-f7484847605c\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.294018 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db49764a-224f-47ef-ad9b-016ac609fc81-operator-scripts\") pod \"db49764a-224f-47ef-ad9b-016ac609fc81\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.295213 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db49764a-224f-47ef-ad9b-016ac609fc81-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db49764a-224f-47ef-ad9b-016ac609fc81" (UID: "db49764a-224f-47ef-ad9b-016ac609fc81"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.295554 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5c0fe20-e8a0-4e46-889c-f7484847605c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5c0fe20-e8a0-4e46-889c-f7484847605c" (UID: "e5c0fe20-e8a0-4e46-889c-f7484847605c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.299903 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db49764a-224f-47ef-ad9b-016ac609fc81-kube-api-access-qqhxl" (OuterVolumeSpecName: "kube-api-access-qqhxl") pod "db49764a-224f-47ef-ad9b-016ac609fc81" (UID: "db49764a-224f-47ef-ad9b-016ac609fc81"). InnerVolumeSpecName "kube-api-access-qqhxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.300564 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c0fe20-e8a0-4e46-889c-f7484847605c-kube-api-access-hwhjs" (OuterVolumeSpecName: "kube-api-access-hwhjs") pod "e5c0fe20-e8a0-4e46-889c-f7484847605c" (UID: "e5c0fe20-e8a0-4e46-889c-f7484847605c"). InnerVolumeSpecName "kube-api-access-hwhjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.395883 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c0fe20-e8a0-4e46-889c-f7484847605c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.395925 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqhxl\" (UniqueName: \"kubernetes.io/projected/db49764a-224f-47ef-ad9b-016ac609fc81-kube-api-access-qqhxl\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.395936 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwhjs\" (UniqueName: \"kubernetes.io/projected/e5c0fe20-e8a0-4e46-889c-f7484847605c-kube-api-access-hwhjs\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.395945 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db49764a-224f-47ef-ad9b-016ac609fc81-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.552821 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.552860 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" event={"ID":"db49764a-224f-47ef-ad9b-016ac609fc81","Type":"ContainerDied","Data":"6da1901ade9731de525718ca7906515f7aba2caceef37214aa1c0fbe0c2a4286"} Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.552908 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6da1901ade9731de525718ca7906515f7aba2caceef37214aa1c0fbe0c2a4286" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.557885 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerStarted","Data":"148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800"} Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.561587 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-8jh4s" event={"ID":"e5c0fe20-e8a0-4e46-889c-f7484847605c","Type":"ContainerDied","Data":"3e32960f3dc06b2e789bd686d4b6b46e8c8920f01793f07e702510a396be236a"} Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.561610 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e32960f3dc06b2e789bd686d4b6b46e8c8920f01793f07e702510a396be236a" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.561690 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:05 crc kubenswrapper[4995]: I0126 23:32:05.572386 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerStarted","Data":"d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c"} Jan 26 23:32:05 crc kubenswrapper[4995]: I0126 23:32:05.572600 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:05 crc kubenswrapper[4995]: I0126 23:32:05.604839 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.14724098 podStartE2EDuration="5.604822972s" podCreationTimestamp="2026-01-26 23:32:00 +0000 UTC" firstStartedPulling="2026-01-26 23:32:01.72187806 +0000 UTC m=+1425.886585525" lastFinishedPulling="2026-01-26 23:32:05.179460052 +0000 UTC m=+1429.344167517" observedRunningTime="2026-01-26 23:32:05.601458597 +0000 UTC m=+1429.766166062" watchObservedRunningTime="2026-01-26 23:32:05.604822972 +0000 UTC m=+1429.769530437" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.182807 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk"] Jan 26 23:32:06 crc kubenswrapper[4995]: E0126 23:32:06.183159 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db49764a-224f-47ef-ad9b-016ac609fc81" containerName="mariadb-account-create-update" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.183185 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="db49764a-224f-47ef-ad9b-016ac609fc81" containerName="mariadb-account-create-update" Jan 26 23:32:06 crc kubenswrapper[4995]: E0126 23:32:06.183215 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c0fe20-e8a0-4e46-889c-f7484847605c" containerName="mariadb-database-create" Jan 26 
23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.183223 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c0fe20-e8a0-4e46-889c-f7484847605c" containerName="mariadb-database-create" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.183381 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="db49764a-224f-47ef-ad9b-016ac609fc81" containerName="mariadb-account-create-update" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.183394 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c0fe20-e8a0-4e46-889c-f7484847605c" containerName="mariadb-database-create" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.184007 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.186829 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-rmsbz" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.187166 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.193758 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk"] Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.226373 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.226534 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-config-data\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.226606 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfj9x\" (UniqueName: \"kubernetes.io/projected/c265038c-ebe8-4aa1-acda-f45361fbd885-kube-api-access-gfj9x\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.226806 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-db-sync-config-data\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.328628 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-db-sync-config-data\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.328689 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.328742 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-config-data\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.328776 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfj9x\" (UniqueName: \"kubernetes.io/projected/c265038c-ebe8-4aa1-acda-f45361fbd885-kube-api-access-gfj9x\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.333775 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.335818 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-db-sync-config-data\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.344406 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-config-data\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.344812 4995 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gfj9x\" (UniqueName: \"kubernetes.io/projected/c265038c-ebe8-4aa1-acda-f45361fbd885-kube-api-access-gfj9x\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.506352 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.981960 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk"] Jan 26 23:32:06 crc kubenswrapper[4995]: W0126 23:32:06.984488 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc265038c_ebe8_4aa1_acda_f45361fbd885.slice/crio-9fc2f1c990b94f43c5a8a87338b008b87968e70be2f3c1e2f10cae06d5a708d1 WatchSource:0}: Error finding container 9fc2f1c990b94f43c5a8a87338b008b87968e70be2f3c1e2f10cae06d5a708d1: Status 404 returned error can't find the container with id 9fc2f1c990b94f43c5a8a87338b008b87968e70be2f3c1e2f10cae06d5a708d1 Jan 26 23:32:07 crc kubenswrapper[4995]: I0126 23:32:07.611719 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" event={"ID":"c265038c-ebe8-4aa1-acda-f45361fbd885","Type":"ContainerStarted","Data":"0bebf82f7d2ff6fccacc8ac1b19e5ae9a0ca59b2e9b344a0b5356ce530d49427"} Jan 26 23:32:07 crc kubenswrapper[4995]: I0126 23:32:07.612014 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" event={"ID":"c265038c-ebe8-4aa1-acda-f45361fbd885","Type":"ContainerStarted","Data":"9fc2f1c990b94f43c5a8a87338b008b87968e70be2f3c1e2f10cae06d5a708d1"} Jan 26 23:32:07 crc kubenswrapper[4995]: I0126 23:32:07.629567 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" podStartSLOduration=1.6295508650000001 podStartE2EDuration="1.629550865s" podCreationTimestamp="2026-01-26 23:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:07.627118894 +0000 UTC m=+1431.791826359" watchObservedRunningTime="2026-01-26 23:32:07.629550865 +0000 UTC m=+1431.794258320" Jan 26 23:32:09 crc kubenswrapper[4995]: I0126 23:32:09.628689 4995 generic.go:334] "Generic (PLEG): container finished" podID="c265038c-ebe8-4aa1-acda-f45361fbd885" containerID="0bebf82f7d2ff6fccacc8ac1b19e5ae9a0ca59b2e9b344a0b5356ce530d49427" exitCode=0 Jan 26 23:32:09 crc kubenswrapper[4995]: I0126 23:32:09.628813 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" event={"ID":"c265038c-ebe8-4aa1-acda-f45361fbd885","Type":"ContainerDied","Data":"0bebf82f7d2ff6fccacc8ac1b19e5ae9a0ca59b2e9b344a0b5356ce530d49427"} Jan 26 23:32:10 crc kubenswrapper[4995]: I0126 23:32:10.893796 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:32:10 crc kubenswrapper[4995]: I0126 23:32:10.894297 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.000153 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.007347 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-config-data\") pod \"c265038c-ebe8-4aa1-acda-f45361fbd885\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.007387 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-combined-ca-bundle\") pod \"c265038c-ebe8-4aa1-acda-f45361fbd885\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.007436 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-db-sync-config-data\") pod \"c265038c-ebe8-4aa1-acda-f45361fbd885\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.007455 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfj9x\" (UniqueName: \"kubernetes.io/projected/c265038c-ebe8-4aa1-acda-f45361fbd885-kube-api-access-gfj9x\") pod \"c265038c-ebe8-4aa1-acda-f45361fbd885\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.017434 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c265038c-ebe8-4aa1-acda-f45361fbd885-kube-api-access-gfj9x" (OuterVolumeSpecName: "kube-api-access-gfj9x") pod "c265038c-ebe8-4aa1-acda-f45361fbd885" (UID: "c265038c-ebe8-4aa1-acda-f45361fbd885"). InnerVolumeSpecName "kube-api-access-gfj9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.018514 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c265038c-ebe8-4aa1-acda-f45361fbd885" (UID: "c265038c-ebe8-4aa1-acda-f45361fbd885"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.050779 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c265038c-ebe8-4aa1-acda-f45361fbd885" (UID: "c265038c-ebe8-4aa1-acda-f45361fbd885"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.066052 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-config-data" (OuterVolumeSpecName: "config-data") pod "c265038c-ebe8-4aa1-acda-f45361fbd885" (UID: "c265038c-ebe8-4aa1-acda-f45361fbd885"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.108486 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.108707 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.108926 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfj9x\" (UniqueName: \"kubernetes.io/projected/c265038c-ebe8-4aa1-acda-f45361fbd885-kube-api-access-gfj9x\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.109057 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.647826 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" event={"ID":"c265038c-ebe8-4aa1-acda-f45361fbd885","Type":"ContainerDied","Data":"9fc2f1c990b94f43c5a8a87338b008b87968e70be2f3c1e2f10cae06d5a708d1"} Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.647864 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fc2f1c990b94f43c5a8a87338b008b87968e70be2f3c1e2f10cae06d5a708d1" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.647928 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.927148 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:11 crc kubenswrapper[4995]: E0126 23:32:11.928645 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c265038c-ebe8-4aa1-acda-f45361fbd885" containerName="watcher-kuttl-db-sync" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.928776 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c265038c-ebe8-4aa1-acda-f45361fbd885" containerName="watcher-kuttl-db-sync" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.929072 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c265038c-ebe8-4aa1-acda-f45361fbd885" containerName="watcher-kuttl-db-sync" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.930343 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.932422 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-rmsbz" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.932429 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.941709 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.956333 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.970838 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.974725 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.980430 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.022677 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89r2w\" (UniqueName: \"kubernetes.io/projected/55417497-6ca7-42c8-ba53-58da68837328-kube-api-access-89r2w\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.022968 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55417497-6ca7-42c8-ba53-58da68837328-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023054 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023165 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpw9s\" (UniqueName: \"kubernetes.io/projected/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-kube-api-access-lpw9s\") pod 
\"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023252 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023356 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023444 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023538 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023630 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023710 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023818 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023897 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.034075 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.035007 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.038042 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.050289 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125518 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89r2w\" (UniqueName: \"kubernetes.io/projected/55417497-6ca7-42c8-ba53-58da68837328-kube-api-access-89r2w\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125564 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jjtz\" (UniqueName: \"kubernetes.io/projected/b24eb3bf-4d35-4163-962f-f3ad03f82019-kube-api-access-8jjtz\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125631 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55417497-6ca7-42c8-ba53-58da68837328-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125658 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: 
\"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125675 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125696 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24eb3bf-4d35-4163-962f-f3ad03f82019-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125713 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125782 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpw9s\" (UniqueName: \"kubernetes.io/projected/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-kube-api-access-lpw9s\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125975 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126095 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126153 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126172 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55417497-6ca7-42c8-ba53-58da68837328-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126195 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126266 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 
26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126310 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126348 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126459 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126503 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126551 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 
23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.130015 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.131597 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.131735 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.132317 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.132698 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.133763 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.135926 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.136534 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.144886 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpw9s\" (UniqueName: \"kubernetes.io/projected/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-kube-api-access-lpw9s\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.145182 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89r2w\" (UniqueName: \"kubernetes.io/projected/55417497-6ca7-42c8-ba53-58da68837328-kube-api-access-89r2w\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.227688 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.227764 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jjtz\" (UniqueName: \"kubernetes.io/projected/b24eb3bf-4d35-4163-962f-f3ad03f82019-kube-api-access-8jjtz\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.227819 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.227847 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24eb3bf-4d35-4163-962f-f3ad03f82019-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.228350 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24eb3bf-4d35-4163-962f-f3ad03f82019-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.228362 4995 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.231825 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.231939 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.232390 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.246808 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jjtz\" (UniqueName: \"kubernetes.io/projected/b24eb3bf-4d35-4163-962f-f3ad03f82019-kube-api-access-8jjtz\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.249657 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.291520 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.348529 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.719184 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.851683 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.925608 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:32:12 crc kubenswrapper[4995]: W0126 23:32:12.958152 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55417497_6ca7_42c8_ba53_58da68837328.slice/crio-a04a3b402a9f38991f1c4fe01f247f03e365e4befb05601ae3567ff49fda3abf WatchSource:0}: Error finding container a04a3b402a9f38991f1c4fe01f247f03e365e4befb05601ae3567ff49fda3abf: Status 404 returned error can't find the container with id a04a3b402a9f38991f1c4fe01f247f03e365e4befb05601ae3567ff49fda3abf Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.665210 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"55417497-6ca7-42c8-ba53-58da68837328","Type":"ContainerStarted","Data":"b94c38476d85b3e5a8a80f53da66673c7f6707238ddfd010b9ae0d0e0e0f1986"} Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.665616 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"55417497-6ca7-42c8-ba53-58da68837328","Type":"ContainerStarted","Data":"a04a3b402a9f38991f1c4fe01f247f03e365e4befb05601ae3567ff49fda3abf"} Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.666464 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c","Type":"ContainerStarted","Data":"d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16"} Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.666506 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c","Type":"ContainerStarted","Data":"1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6"} Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.666516 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c","Type":"ContainerStarted","Data":"68985bac7f5331913df7825c5a17b60e9e19f5cb1a899bdf14134c0eda5b546b"} Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.666654 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.667785 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b24eb3bf-4d35-4163-962f-f3ad03f82019","Type":"ContainerStarted","Data":"6252efa89a6bded11f55db4306e63c08033e933d2981726c47ebad7505a562dc"} Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.667896 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b24eb3bf-4d35-4163-962f-f3ad03f82019","Type":"ContainerStarted","Data":"fe618fa252c29164da67ca6fb2b81b5cfcd348cb451091a77890f43ff25b2bdf"} Jan 26 
23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.686638 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.686616847 podStartE2EDuration="2.686616847s" podCreationTimestamp="2026-01-26 23:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:13.680964115 +0000 UTC m=+1437.845671580" watchObservedRunningTime="2026-01-26 23:32:13.686616847 +0000 UTC m=+1437.851324312" Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.701849 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.701828168 podStartE2EDuration="2.701828168s" podCreationTimestamp="2026-01-26 23:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:13.701009477 +0000 UTC m=+1437.865716952" watchObservedRunningTime="2026-01-26 23:32:13.701828168 +0000 UTC m=+1437.866535633" Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.722600 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.722577517 podStartE2EDuration="1.722577517s" podCreationTimestamp="2026-01-26 23:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:13.714572047 +0000 UTC m=+1437.879279522" watchObservedRunningTime="2026-01-26 23:32:13.722577517 +0000 UTC m=+1437.887284972" Jan 26 23:32:15 crc kubenswrapper[4995]: I0126 23:32:15.827165 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:17 crc kubenswrapper[4995]: I0126 23:32:17.249830 4995 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:17 crc kubenswrapper[4995]: I0126 23:32:17.349881 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.250704 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.292363 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.349792 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.402780 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.402842 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.437991 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.757281 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.768905 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.786697 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 
23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.799885 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:24 crc kubenswrapper[4995]: I0126 23:32:24.767741 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:24 crc kubenswrapper[4995]: I0126 23:32:24.768076 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-central-agent" containerID="cri-o://16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb" gracePeriod=30 Jan 26 23:32:24 crc kubenswrapper[4995]: I0126 23:32:24.769529 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-notification-agent" containerID="cri-o://380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180" gracePeriod=30 Jan 26 23:32:24 crc kubenswrapper[4995]: I0126 23:32:24.769612 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="sg-core" containerID="cri-o://148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800" gracePeriod=30 Jan 26 23:32:24 crc kubenswrapper[4995]: I0126 23:32:24.769531 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="proxy-httpd" containerID="cri-o://d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c" gracePeriod=30 Jan 26 23:32:24 crc kubenswrapper[4995]: I0126 23:32:24.782207 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" 
podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.168:3000/\": read tcp 10.217.0.2:45262->10.217.0.168:3000: read: connection reset by peer" Jan 26 23:32:25 crc kubenswrapper[4995]: I0126 23:32:25.795844 4995 generic.go:334] "Generic (PLEG): container finished" podID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerID="d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c" exitCode=0 Jan 26 23:32:25 crc kubenswrapper[4995]: I0126 23:32:25.796201 4995 generic.go:334] "Generic (PLEG): container finished" podID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerID="148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800" exitCode=2 Jan 26 23:32:25 crc kubenswrapper[4995]: I0126 23:32:25.796215 4995 generic.go:334] "Generic (PLEG): container finished" podID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerID="16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb" exitCode=0 Jan 26 23:32:25 crc kubenswrapper[4995]: I0126 23:32:25.795947 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerDied","Data":"d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c"} Jan 26 23:32:25 crc kubenswrapper[4995]: I0126 23:32:25.796258 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerDied","Data":"148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800"} Jan 26 23:32:25 crc kubenswrapper[4995]: I0126 23:32:25.796277 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerDied","Data":"16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb"} Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.729022 4995 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk"] Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.746626 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk"] Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.799170 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherfed2-account-delete-vsqtt"] Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.800500 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.816321 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherfed2-account-delete-vsqtt"] Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.846972 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.847172 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="55417497-6ca7-42c8-ba53-58da68837328" containerName="watcher-decision-engine" containerID="cri-o://b94c38476d85b3e5a8a80f53da66673c7f6707238ddfd010b9ae0d0e0e0f1986" gracePeriod=30 Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.912814 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tj2z\" (UniqueName: \"kubernetes.io/projected/23d44b8e-50b6-4446-a75b-ca68e79ff57f-kube-api-access-7tj2z\") pod \"watcherfed2-account-delete-vsqtt\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.913091 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23d44b8e-50b6-4446-a75b-ca68e79ff57f-operator-scripts\") pod \"watcherfed2-account-delete-vsqtt\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.930346 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.930626 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-kuttl-api-log" containerID="cri-o://1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6" gracePeriod=30 Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.930963 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-api" containerID="cri-o://d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16" gracePeriod=30 Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.975144 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.975339 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="b24eb3bf-4d35-4163-962f-f3ad03f82019" containerName="watcher-applier" containerID="cri-o://6252efa89a6bded11f55db4306e63c08033e933d2981726c47ebad7505a562dc" gracePeriod=30 Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.018976 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tj2z\" (UniqueName: \"kubernetes.io/projected/23d44b8e-50b6-4446-a75b-ca68e79ff57f-kube-api-access-7tj2z\") pod 
\"watcherfed2-account-delete-vsqtt\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.019292 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23d44b8e-50b6-4446-a75b-ca68e79ff57f-operator-scripts\") pod \"watcherfed2-account-delete-vsqtt\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.020074 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23d44b8e-50b6-4446-a75b-ca68e79ff57f-operator-scripts\") pod \"watcherfed2-account-delete-vsqtt\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.048232 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tj2z\" (UniqueName: \"kubernetes.io/projected/23d44b8e-50b6-4446-a75b-ca68e79ff57f-kube-api-access-7tj2z\") pod \"watcherfed2-account-delete-vsqtt\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.124929 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.612763 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherfed2-account-delete-vsqtt"] Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.694020 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736047 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-log-httpd\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736435 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-run-httpd\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736464 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-combined-ca-bundle\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736511 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736587 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-sg-core-conf-yaml\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736642 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-ceilometer-tls-certs\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736700 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdltb\" (UniqueName: \"kubernetes.io/projected/a19dab45-658a-43e5-93f9-7405f4e265b8-kube-api-access-rdltb\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736773 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-config-data\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736801 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-scripts\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736817 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.739347 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.739373 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.743756 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19dab45-658a-43e5-93f9-7405f4e265b8-kube-api-access-rdltb" (OuterVolumeSpecName: "kube-api-access-rdltb") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "kube-api-access-rdltb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.744303 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-scripts" (OuterVolumeSpecName: "scripts") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.785262 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.805758 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.837939 4995 generic.go:334] "Generic (PLEG): container finished" podID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerID="1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6" exitCode=143 Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.838018 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c","Type":"ContainerDied","Data":"1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6"} Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.839472 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" event={"ID":"23d44b8e-50b6-4446-a75b-ca68e79ff57f","Type":"ContainerStarted","Data":"277efe3193b009f2b06839712b4dacd62f8313f279f58b3eccc7197afb22175e"} Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.839515 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" event={"ID":"23d44b8e-50b6-4446-a75b-ca68e79ff57f","Type":"ContainerStarted","Data":"9d8255fbbc8921fee6dd6a4844a76364f4032c5800dd2e1becc6405460f84172"} Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.841246 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.841422 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.843177 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.843211 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdltb\" (UniqueName: \"kubernetes.io/projected/a19dab45-658a-43e5-93f9-7405f4e265b8-kube-api-access-rdltb\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.843223 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.845831 4995 generic.go:334] "Generic (PLEG): container finished" podID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerID="380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180" exitCode=0 
Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.845873 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerDied","Data":"380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180"} Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.845906 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerDied","Data":"058946e2dc5a1942a8072bf11dde6ef901ddf82c71950b72a1b542ae3f72abdc"} Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.845924 4995 scope.go:117] "RemoveContainer" containerID="d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.846443 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.862676 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" podStartSLOduration=1.862654748 podStartE2EDuration="1.862654748s" podCreationTimestamp="2026-01-26 23:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:29.852844012 +0000 UTC m=+1454.017551477" watchObservedRunningTime="2026-01-26 23:32:29.862654748 +0000 UTC m=+1454.027362213" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.865222 4995 scope.go:117] "RemoveContainer" containerID="148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.901950 4995 scope.go:117] "RemoveContainer" containerID="380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.911392 4995 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-config-data" (OuterVolumeSpecName: "config-data") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.927827 4995 scope.go:117] "RemoveContainer" containerID="16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.944278 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.944314 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.957221 4995 scope.go:117] "RemoveContainer" containerID="d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c" Jan 26 23:32:29 crc kubenswrapper[4995]: E0126 23:32:29.957686 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c\": container with ID starting with d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c not found: ID does not exist" containerID="d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.957754 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c"} err="failed to get container status 
\"d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c\": rpc error: code = NotFound desc = could not find container \"d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c\": container with ID starting with d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c not found: ID does not exist" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.957783 4995 scope.go:117] "RemoveContainer" containerID="148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800" Jan 26 23:32:29 crc kubenswrapper[4995]: E0126 23:32:29.958153 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800\": container with ID starting with 148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800 not found: ID does not exist" containerID="148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.958201 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800"} err="failed to get container status \"148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800\": rpc error: code = NotFound desc = could not find container \"148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800\": container with ID starting with 148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800 not found: ID does not exist" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.958239 4995 scope.go:117] "RemoveContainer" containerID="380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180" Jan 26 23:32:29 crc kubenswrapper[4995]: E0126 23:32:29.958593 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180\": container with ID starting with 380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180 not found: ID does not exist" containerID="380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.958650 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180"} err="failed to get container status \"380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180\": rpc error: code = NotFound desc = could not find container \"380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180\": container with ID starting with 380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180 not found: ID does not exist" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.958684 4995 scope.go:117] "RemoveContainer" containerID="16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb" Jan 26 23:32:29 crc kubenswrapper[4995]: E0126 23:32:29.958946 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb\": container with ID starting with 16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb not found: ID does not exist" containerID="16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.958970 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb"} err="failed to get container status \"16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb\": rpc error: code = NotFound desc = could not find container \"16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb\": container with ID 
starting with 16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb not found: ID does not exist" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.196848 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.203152 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.235472 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:30 crc kubenswrapper[4995]: E0126 23:32:30.235883 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="sg-core" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.235906 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="sg-core" Jan 26 23:32:30 crc kubenswrapper[4995]: E0126 23:32:30.235924 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-notification-agent" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.235932 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-notification-agent" Jan 26 23:32:30 crc kubenswrapper[4995]: E0126 23:32:30.235952 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-central-agent" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.235961 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-central-agent" Jan 26 23:32:30 crc kubenswrapper[4995]: E0126 23:32:30.235977 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" 
containerName="proxy-httpd" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.235985 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="proxy-httpd" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.236211 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-central-agent" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.236229 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="proxy-httpd" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.236240 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-notification-agent" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.236259 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="sg-core" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.238054 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.242806 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.243002 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.243038 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.251758 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350260 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350305 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-scripts\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350358 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-log-httpd\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350414 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-run-httpd\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350548 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350618 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-config-data\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350648 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350673 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqdzc\" (UniqueName: \"kubernetes.io/projected/632dc482-0650-4bfc-a47c-5a573888ab9a-kube-api-access-dqdzc\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.452286 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.452370 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-config-data\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.452402 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.452431 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqdzc\" (UniqueName: \"kubernetes.io/projected/632dc482-0650-4bfc-a47c-5a573888ab9a-kube-api-access-dqdzc\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.452506 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.452530 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-scripts\") pod \"ceilometer-0\" (UID: 
\"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.453271 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-log-httpd\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.453398 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-run-httpd\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.453781 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-log-httpd\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.453798 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-run-httpd\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.458651 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.464709 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-config-data\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.466498 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.470328 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-scripts\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.474647 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.475944 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqdzc\" (UniqueName: \"kubernetes.io/projected/632dc482-0650-4bfc-a47c-5a573888ab9a-kube-api-access-dqdzc\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.538428 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" path="/var/lib/kubelet/pods/a19dab45-658a-43e5-93f9-7405f4e265b8/volumes" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.539307 4995 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c265038c-ebe8-4aa1-acda-f45361fbd885" path="/var/lib/kubelet/pods/c265038c-ebe8-4aa1-acda-f45361fbd885/volumes" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.554634 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.616089 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.656134 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-cert-memcached-mtls\") pod \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.656704 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpw9s\" (UniqueName: \"kubernetes.io/projected/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-kube-api-access-lpw9s\") pod \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.656766 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-combined-ca-bundle\") pod \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.656835 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-config-data\") pod \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " Jan 26 
23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.656897 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-custom-prometheus-ca\") pod \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.656968 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-logs\") pod \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.660408 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-logs" (OuterVolumeSpecName: "logs") pod "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" (UID: "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.663622 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-kube-api-access-lpw9s" (OuterVolumeSpecName: "kube-api-access-lpw9s") pod "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" (UID: "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c"). InnerVolumeSpecName "kube-api-access-lpw9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.683325 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" (UID: "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.703471 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-config-data" (OuterVolumeSpecName: "config-data") pod "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" (UID: "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.717243 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" (UID: "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.741557 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" (UID: "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.761286 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpw9s\" (UniqueName: \"kubernetes.io/projected/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-kube-api-access-lpw9s\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.761321 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.761334 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.761346 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.761358 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.761371 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.858644 4995 generic.go:334] "Generic (PLEG): container finished" podID="23d44b8e-50b6-4446-a75b-ca68e79ff57f" containerID="277efe3193b009f2b06839712b4dacd62f8313f279f58b3eccc7197afb22175e" exitCode=0 Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.858723 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" event={"ID":"23d44b8e-50b6-4446-a75b-ca68e79ff57f","Type":"ContainerDied","Data":"277efe3193b009f2b06839712b4dacd62f8313f279f58b3eccc7197afb22175e"} Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.862569 4995 generic.go:334] "Generic (PLEG): container finished" podID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerID="d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16" exitCode=0 Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.862602 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c","Type":"ContainerDied","Data":"d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16"} Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.862618 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c","Type":"ContainerDied","Data":"68985bac7f5331913df7825c5a17b60e9e19f5cb1a899bdf14134c0eda5b546b"} Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.862635 4995 scope.go:117] "RemoveContainer" containerID="d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.862667 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.887347 4995 scope.go:117] "RemoveContainer" containerID="1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.901376 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.910256 4995 scope.go:117] "RemoveContainer" containerID="d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16" Jan 26 23:32:30 crc kubenswrapper[4995]: E0126 23:32:30.910731 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16\": container with ID starting with d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16 not found: ID does not exist" containerID="d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.910794 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16"} err="failed to get container status \"d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16\": rpc error: code = NotFound desc = could not find container \"d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16\": container with ID starting with d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16 not found: ID does not exist" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.910825 4995 scope.go:117] "RemoveContainer" containerID="1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.910956 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:30 crc kubenswrapper[4995]: E0126 23:32:30.911148 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6\": container with ID starting with 1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6 not found: ID does not exist" containerID="1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.911183 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6"} err="failed to get container status \"1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6\": rpc error: code = NotFound desc = could not find container \"1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6\": container with ID starting with 1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6 not found: ID does not exist" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.999251 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:31 crc kubenswrapper[4995]: W0126 23:32:31.001531 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod632dc482_0650_4bfc_a47c_5a573888ab9a.slice/crio-7fcc567c9cccf85872cb8fdb86044065f9106b92aefb3dedd3fdd40c8d6b7df7 WatchSource:0}: Error finding container 7fcc567c9cccf85872cb8fdb86044065f9106b92aefb3dedd3fdd40c8d6b7df7: Status 404 returned error can't find the container with id 7fcc567c9cccf85872cb8fdb86044065f9106b92aefb3dedd3fdd40c8d6b7df7 Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.086311 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:31 crc 
kubenswrapper[4995]: I0126 23:32:31.872547 4995 generic.go:334] "Generic (PLEG): container finished" podID="b24eb3bf-4d35-4163-962f-f3ad03f82019" containerID="6252efa89a6bded11f55db4306e63c08033e933d2981726c47ebad7505a562dc" exitCode=0 Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.872707 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b24eb3bf-4d35-4163-962f-f3ad03f82019","Type":"ContainerDied","Data":"6252efa89a6bded11f55db4306e63c08033e933d2981726c47ebad7505a562dc"} Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.873081 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b24eb3bf-4d35-4163-962f-f3ad03f82019","Type":"ContainerDied","Data":"fe618fa252c29164da67ca6fb2b81b5cfcd348cb451091a77890f43ff25b2bdf"} Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.873096 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe618fa252c29164da67ca6fb2b81b5cfcd348cb451091a77890f43ff25b2bdf" Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.874789 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerStarted","Data":"31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3"} Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.874837 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerStarted","Data":"7fcc567c9cccf85872cb8fdb86044065f9106b92aefb3dedd3fdd40c8d6b7df7"} Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.881569 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.992706 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-cert-memcached-mtls\") pod \"b24eb3bf-4d35-4163-962f-f3ad03f82019\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.992808 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jjtz\" (UniqueName: \"kubernetes.io/projected/b24eb3bf-4d35-4163-962f-f3ad03f82019-kube-api-access-8jjtz\") pod \"b24eb3bf-4d35-4163-962f-f3ad03f82019\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.992845 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-combined-ca-bundle\") pod \"b24eb3bf-4d35-4163-962f-f3ad03f82019\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.992942 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-config-data\") pod \"b24eb3bf-4d35-4163-962f-f3ad03f82019\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.993023 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24eb3bf-4d35-4163-962f-f3ad03f82019-logs\") pod \"b24eb3bf-4d35-4163-962f-f3ad03f82019\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.993981 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b24eb3bf-4d35-4163-962f-f3ad03f82019-logs" (OuterVolumeSpecName: "logs") pod "b24eb3bf-4d35-4163-962f-f3ad03f82019" (UID: "b24eb3bf-4d35-4163-962f-f3ad03f82019"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:31.998490 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24eb3bf-4d35-4163-962f-f3ad03f82019-kube-api-access-8jjtz" (OuterVolumeSpecName: "kube-api-access-8jjtz") pod "b24eb3bf-4d35-4163-962f-f3ad03f82019" (UID: "b24eb3bf-4d35-4163-962f-f3ad03f82019"). InnerVolumeSpecName "kube-api-access-8jjtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.040147 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b24eb3bf-4d35-4163-962f-f3ad03f82019" (UID: "b24eb3bf-4d35-4163-962f-f3ad03f82019"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.061524 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-config-data" (OuterVolumeSpecName: "config-data") pod "b24eb3bf-4d35-4163-962f-f3ad03f82019" (UID: "b24eb3bf-4d35-4163-962f-f3ad03f82019"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.085000 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "b24eb3bf-4d35-4163-962f-f3ad03f82019" (UID: "b24eb3bf-4d35-4163-962f-f3ad03f82019"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.095215 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24eb3bf-4d35-4163-962f-f3ad03f82019-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.095256 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.095273 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jjtz\" (UniqueName: \"kubernetes.io/projected/b24eb3bf-4d35-4163-962f-f3ad03f82019-kube-api-access-8jjtz\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.095286 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.095298 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.204611 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.297823 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tj2z\" (UniqueName: \"kubernetes.io/projected/23d44b8e-50b6-4446-a75b-ca68e79ff57f-kube-api-access-7tj2z\") pod \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.298272 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23d44b8e-50b6-4446-a75b-ca68e79ff57f-operator-scripts\") pod \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.299129 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23d44b8e-50b6-4446-a75b-ca68e79ff57f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23d44b8e-50b6-4446-a75b-ca68e79ff57f" (UID: "23d44b8e-50b6-4446-a75b-ca68e79ff57f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.302501 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23d44b8e-50b6-4446-a75b-ca68e79ff57f-kube-api-access-7tj2z" (OuterVolumeSpecName: "kube-api-access-7tj2z") pod "23d44b8e-50b6-4446-a75b-ca68e79ff57f" (UID: "23d44b8e-50b6-4446-a75b-ca68e79ff57f"). InnerVolumeSpecName "kube-api-access-7tj2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.400442 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23d44b8e-50b6-4446-a75b-ca68e79ff57f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.400480 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tj2z\" (UniqueName: \"kubernetes.io/projected/23d44b8e-50b6-4446-a75b-ca68e79ff57f-kube-api-access-7tj2z\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.530036 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" path="/var/lib/kubelet/pods/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c/volumes" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.885487 4995 generic.go:334] "Generic (PLEG): container finished" podID="55417497-6ca7-42c8-ba53-58da68837328" containerID="b94c38476d85b3e5a8a80f53da66673c7f6707238ddfd010b9ae0d0e0e0f1986" exitCode=0 Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.885570 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"55417497-6ca7-42c8-ba53-58da68837328","Type":"ContainerDied","Data":"b94c38476d85b3e5a8a80f53da66673c7f6707238ddfd010b9ae0d0e0e0f1986"} Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.889242 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerStarted","Data":"f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa"} Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.892630 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.893691 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.893850 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.894218 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" event={"ID":"23d44b8e-50b6-4446-a75b-ca68e79ff57f","Type":"ContainerDied","Data":"9d8255fbbc8921fee6dd6a4844a76364f4032c5800dd2e1becc6405460f84172"} Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.894259 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d8255fbbc8921fee6dd6a4844a76364f4032c5800dd2e1becc6405460f84172" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.928568 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.935458 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.010750 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-custom-prometheus-ca\") pod \"55417497-6ca7-42c8-ba53-58da68837328\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.010797 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-cert-memcached-mtls\") pod \"55417497-6ca7-42c8-ba53-58da68837328\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.010894 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-config-data\") pod \"55417497-6ca7-42c8-ba53-58da68837328\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.010916 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-combined-ca-bundle\") pod \"55417497-6ca7-42c8-ba53-58da68837328\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.010972 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55417497-6ca7-42c8-ba53-58da68837328-logs\") pod \"55417497-6ca7-42c8-ba53-58da68837328\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.010999 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89r2w\" (UniqueName: \"kubernetes.io/projected/55417497-6ca7-42c8-ba53-58da68837328-kube-api-access-89r2w\") pod \"55417497-6ca7-42c8-ba53-58da68837328\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.011659 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55417497-6ca7-42c8-ba53-58da68837328-logs" (OuterVolumeSpecName: "logs") pod "55417497-6ca7-42c8-ba53-58da68837328" (UID: "55417497-6ca7-42c8-ba53-58da68837328"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.015328 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55417497-6ca7-42c8-ba53-58da68837328-kube-api-access-89r2w" (OuterVolumeSpecName: "kube-api-access-89r2w") pod "55417497-6ca7-42c8-ba53-58da68837328" (UID: "55417497-6ca7-42c8-ba53-58da68837328"). InnerVolumeSpecName "kube-api-access-89r2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.036737 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55417497-6ca7-42c8-ba53-58da68837328" (UID: "55417497-6ca7-42c8-ba53-58da68837328"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.057968 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "55417497-6ca7-42c8-ba53-58da68837328" (UID: "55417497-6ca7-42c8-ba53-58da68837328"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.060875 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-config-data" (OuterVolumeSpecName: "config-data") pod "55417497-6ca7-42c8-ba53-58da68837328" (UID: "55417497-6ca7-42c8-ba53-58da68837328"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.104496 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "55417497-6ca7-42c8-ba53-58da68837328" (UID: "55417497-6ca7-42c8-ba53-58da68837328"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.112618 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.112663 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55417497-6ca7-42c8-ba53-58da68837328-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.112686 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89r2w\" (UniqueName: \"kubernetes.io/projected/55417497-6ca7-42c8-ba53-58da68837328-kube-api-access-89r2w\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.112704 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.112741 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.112759 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.823957 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8jh4s"] Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.832040 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8jh4s"] Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.848689 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-fed2-account-create-update-xlm64"] Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.857586 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherfed2-account-delete-vsqtt"] Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.864229 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherfed2-account-delete-vsqtt"] Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.870663 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-fed2-account-create-update-xlm64"] Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.903031 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"55417497-6ca7-42c8-ba53-58da68837328","Type":"ContainerDied","Data":"a04a3b402a9f38991f1c4fe01f247f03e365e4befb05601ae3567ff49fda3abf"} Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.903087 4995 scope.go:117] "RemoveContainer" containerID="b94c38476d85b3e5a8a80f53da66673c7f6707238ddfd010b9ae0d0e0e0f1986" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.903049 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.905640 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerStarted","Data":"70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48"}
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.943450 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.952807 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.541977 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23d44b8e-50b6-4446-a75b-ca68e79ff57f" path="/var/lib/kubelet/pods/23d44b8e-50b6-4446-a75b-ca68e79ff57f/volumes"
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.542938 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55417497-6ca7-42c8-ba53-58da68837328" path="/var/lib/kubelet/pods/55417497-6ca7-42c8-ba53-58da68837328/volumes"
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.543563 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24eb3bf-4d35-4163-962f-f3ad03f82019" path="/var/lib/kubelet/pods/b24eb3bf-4d35-4163-962f-f3ad03f82019/volumes"
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.544576 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db49764a-224f-47ef-ad9b-016ac609fc81" path="/var/lib/kubelet/pods/db49764a-224f-47ef-ad9b-016ac609fc81/volumes"
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.545034 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5c0fe20-e8a0-4e46-889c-f7484847605c" path="/var/lib/kubelet/pods/e5c0fe20-e8a0-4e46-889c-f7484847605c/volumes"
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.914566 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerStarted","Data":"83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c"}
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.915340 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-central-agent" containerID="cri-o://31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3" gracePeriod=30
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.915425 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="proxy-httpd" containerID="cri-o://83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c" gracePeriod=30
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.915445 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-notification-agent" containerID="cri-o://f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa" gracePeriod=30
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.915383 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.915425 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="sg-core" containerID="cri-o://70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48" gracePeriod=30
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.984266 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.749202592 podStartE2EDuration="4.984249978s" podCreationTimestamp="2026-01-26 23:32:30 +0000 UTC" firstStartedPulling="2026-01-26 23:32:31.003178854 +0000 UTC m=+1455.167886319" lastFinishedPulling="2026-01-26 23:32:34.23822625 +0000 UTC m=+1458.402933705" observedRunningTime="2026-01-26 23:32:34.956917224 +0000 UTC m=+1459.121624689" watchObservedRunningTime="2026-01-26 23:32:34.984249978 +0000 UTC m=+1459.148957443"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.047940 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-22m6m"]
Jan 26 23:32:35 crc kubenswrapper[4995]: E0126 23:32:35.048241 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55417497-6ca7-42c8-ba53-58da68837328" containerName="watcher-decision-engine"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048257 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="55417497-6ca7-42c8-ba53-58da68837328" containerName="watcher-decision-engine"
Jan 26 23:32:35 crc kubenswrapper[4995]: E0126 23:32:35.048276 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24eb3bf-4d35-4163-962f-f3ad03f82019" containerName="watcher-applier"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048283 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24eb3bf-4d35-4163-962f-f3ad03f82019" containerName="watcher-applier"
Jan 26 23:32:35 crc kubenswrapper[4995]: E0126 23:32:35.048296 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-api"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048302 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-api"
Jan 26 23:32:35 crc kubenswrapper[4995]: E0126 23:32:35.048321 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-kuttl-api-log"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048326 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-kuttl-api-log"
Jan 26 23:32:35 crc kubenswrapper[4995]: E0126 23:32:35.048337 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23d44b8e-50b6-4446-a75b-ca68e79ff57f" containerName="mariadb-account-delete"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048343 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="23d44b8e-50b6-4446-a75b-ca68e79ff57f" containerName="mariadb-account-delete"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048472 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-api"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048482 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="55417497-6ca7-42c8-ba53-58da68837328" containerName="watcher-decision-engine"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048496 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-kuttl-api-log"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048506 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24eb3bf-4d35-4163-962f-f3ad03f82019" containerName="watcher-applier"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048516 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="23d44b8e-50b6-4446-a75b-ca68e79ff57f" containerName="mariadb-account-delete"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.049000 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.073185 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-22m6m"]
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.133771 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"]
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.134919 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.144271 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.151356 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a73a610c-0780-46cb-9f01-09b48049748d-operator-scripts\") pod \"watcher-db-create-22m6m\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.151412 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzs4h\" (UniqueName: \"kubernetes.io/projected/a73a610c-0780-46cb-9f01-09b48049748d-kube-api-access-vzs4h\") pod \"watcher-db-create-22m6m\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.170317 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"]
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.252929 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a73a610c-0780-46cb-9f01-09b48049748d-operator-scripts\") pod \"watcher-db-create-22m6m\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.252980 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzs4h\" (UniqueName: \"kubernetes.io/projected/a73a610c-0780-46cb-9f01-09b48049748d-kube-api-access-vzs4h\") pod \"watcher-db-create-22m6m\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.253020 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3461eb3-3b0d-489f-875c-bab8e4f00694-operator-scripts\") pod \"watcher-8707-account-create-update-mgxtq\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.253047 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mp4c\" (UniqueName: \"kubernetes.io/projected/f3461eb3-3b0d-489f-875c-bab8e4f00694-kube-api-access-2mp4c\") pod \"watcher-8707-account-create-update-mgxtq\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.253733 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a73a610c-0780-46cb-9f01-09b48049748d-operator-scripts\") pod \"watcher-db-create-22m6m\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.278156 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzs4h\" (UniqueName: \"kubernetes.io/projected/a73a610c-0780-46cb-9f01-09b48049748d-kube-api-access-vzs4h\") pod \"watcher-db-create-22m6m\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.354738 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3461eb3-3b0d-489f-875c-bab8e4f00694-operator-scripts\") pod \"watcher-8707-account-create-update-mgxtq\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.354785 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mp4c\" (UniqueName: \"kubernetes.io/projected/f3461eb3-3b0d-489f-875c-bab8e4f00694-kube-api-access-2mp4c\") pod \"watcher-8707-account-create-update-mgxtq\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.355622 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3461eb3-3b0d-489f-875c-bab8e4f00694-operator-scripts\") pod \"watcher-8707-account-create-update-mgxtq\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.372674 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.384842 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mp4c\" (UniqueName: \"kubernetes.io/projected/f3461eb3-3b0d-489f-875c-bab8e4f00694-kube-api-access-2mp4c\") pod \"watcher-8707-account-create-update-mgxtq\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.447832 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.852519 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-22m6m"]
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.928542 4995 generic.go:334] "Generic (PLEG): container finished" podID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerID="70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48" exitCode=2
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.928574 4995 generic.go:334] "Generic (PLEG): container finished" podID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerID="f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa" exitCode=0
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.928586 4995 generic.go:334] "Generic (PLEG): container finished" podID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerID="31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3" exitCode=0
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.928617 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerDied","Data":"70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48"}
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.928664 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerDied","Data":"f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa"}
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.928681 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerDied","Data":"31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3"}
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.929708 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-22m6m" event={"ID":"a73a610c-0780-46cb-9f01-09b48049748d","Type":"ContainerStarted","Data":"38f4dd4710cc6d70fb81ffa7e151b462478364abb1af0aabfd4fab35dfd092aa"}
Jan 26 23:32:35 crc kubenswrapper[4995]: W0126 23:32:35.989920 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3461eb3_3b0d_489f_875c_bab8e4f00694.slice/crio-a1f1ed5e9b068ea888a42a72127ee0a7ad65e471cd55e125e4120fc7400644d8 WatchSource:0}: Error finding container a1f1ed5e9b068ea888a42a72127ee0a7ad65e471cd55e125e4120fc7400644d8: Status 404 returned error can't find the container with id a1f1ed5e9b068ea888a42a72127ee0a7ad65e471cd55e125e4120fc7400644d8
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.991493 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"]
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.720711 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.778878 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-combined-ca-bundle\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.778963 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-run-httpd\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779048 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-log-httpd\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779085 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-sg-core-conf-yaml\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779219 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqdzc\" (UniqueName: \"kubernetes.io/projected/632dc482-0650-4bfc-a47c-5a573888ab9a-kube-api-access-dqdzc\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779244 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-scripts\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779355 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-ceilometer-tls-certs\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779375 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-config-data\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779552 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779745 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779995 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.780010 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.786245 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/632dc482-0650-4bfc-a47c-5a573888ab9a-kube-api-access-dqdzc" (OuterVolumeSpecName: "kube-api-access-dqdzc") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "kube-api-access-dqdzc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.786335 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-scripts" (OuterVolumeSpecName: "scripts") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.814552 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.824898 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.855246 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.871274 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-config-data" (OuterVolumeSpecName: "config-data") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.881641 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.881672 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqdzc\" (UniqueName: \"kubernetes.io/projected/632dc482-0650-4bfc-a47c-5a573888ab9a-kube-api-access-dqdzc\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.881684 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.881692 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.881701 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.881710 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.942811 4995 generic.go:334] "Generic (PLEG): container finished" podID="a73a610c-0780-46cb-9f01-09b48049748d" containerID="b4b16b6f1cc961085f1980b33bb732c8fc0fbcf31eda7643a2f07d72636e35f6" exitCode=0
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.942876 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-22m6m" event={"ID":"a73a610c-0780-46cb-9f01-09b48049748d","Type":"ContainerDied","Data":"b4b16b6f1cc961085f1980b33bb732c8fc0fbcf31eda7643a2f07d72636e35f6"}
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.947536 4995 generic.go:334] "Generic (PLEG): container finished" podID="f3461eb3-3b0d-489f-875c-bab8e4f00694" containerID="4f43eaafefb61a73772d9d42e692be3b8d70484a9a76ac96db06e9b550ed122a" exitCode=0
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.947616 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq" event={"ID":"f3461eb3-3b0d-489f-875c-bab8e4f00694","Type":"ContainerDied","Data":"4f43eaafefb61a73772d9d42e692be3b8d70484a9a76ac96db06e9b550ed122a"}
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.947654 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq" event={"ID":"f3461eb3-3b0d-489f-875c-bab8e4f00694","Type":"ContainerStarted","Data":"a1f1ed5e9b068ea888a42a72127ee0a7ad65e471cd55e125e4120fc7400644d8"}
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.950343 4995 generic.go:334] "Generic (PLEG): container finished" podID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerID="83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c" exitCode=0
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.950367 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerDied","Data":"83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c"}
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.950402 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerDied","Data":"7fcc567c9cccf85872cb8fdb86044065f9106b92aefb3dedd3fdd40c8d6b7df7"}
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.950433 4995 scope.go:117] "RemoveContainer" containerID="83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c"
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.950653 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.971356 4995 scope.go:117] "RemoveContainer" containerID="70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48"
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.993907 4995 scope.go:117] "RemoveContainer" containerID="f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.000405 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.008701 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.021304 4995 scope.go:117] "RemoveContainer" containerID="31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.021439 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.021752 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-central-agent"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.021768 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-central-agent"
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.021784 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-notification-agent"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.021790 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-notification-agent"
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.021803 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="sg-core"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.021810 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="sg-core"
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.021821 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="proxy-httpd"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.021827 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="proxy-httpd"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.021981 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-notification-agent"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.022001 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-central-agent"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.022010 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="proxy-httpd"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.022018 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="sg-core"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.023337 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.024994 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.028732 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.028994 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.036231 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.059952 4995 scope.go:117] "RemoveContainer" containerID="83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c"
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.060717 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c\": container with ID starting with 83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c not found: ID does not exist" containerID="83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.060748 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c"} err="failed to get container status \"83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c\": rpc error: code = NotFound desc = could not find container \"83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c\": container with ID starting with 83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c not found: ID does not exist"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.060769 4995 scope.go:117] "RemoveContainer" containerID="70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48"
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.062367 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48\": container with ID starting with 70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48 not found: ID does not exist" containerID="70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.062419 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48"} err="failed to get container status \"70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48\": rpc error: code = NotFound desc = could not find container \"70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48\": container with ID starting with 70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48 not found: ID does not exist"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.062474 4995 scope.go:117] "RemoveContainer" containerID="f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa"
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.063210 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa\": container with ID starting with f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa not found: ID does not exist" containerID="f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.063362 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa"} err="failed to get container status \"f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa\": rpc error: code = NotFound desc = could not find container \"f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa\": container with ID starting with f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa not found: ID does not exist"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.063462 4995 scope.go:117] "RemoveContainer" containerID="31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3"
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.064492 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3\": container with ID starting with 31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3 not found: ID does not exist" containerID="31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.064628 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3"} err="failed to get container status \"31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3\": rpc error: code = NotFound desc = could not find container \"31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3\": container with ID starting with 31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3 not found: ID does not exist"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.083981 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-config-data\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.084024 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.084092 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-run-httpd\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.084128 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.084177 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.084218 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwthw\" (UniqueName:
\"kubernetes.io/projected/0d5b5d8b-4be0-469b-950f-0dbee7966330-kube-api-access-lwthw\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.084235 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-log-httpd\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.084254 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-scripts\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185446 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185567 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwthw\" (UniqueName: \"kubernetes.io/projected/0d5b5d8b-4be0-469b-950f-0dbee7966330-kube-api-access-lwthw\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185614 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-log-httpd\") pod \"ceilometer-0\" (UID: 
\"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185655 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-scripts\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185688 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-config-data\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185716 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185789 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-run-httpd\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185838 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.186692 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-log-httpd\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.186880 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-run-httpd\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.190798 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.190866 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.195632 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-scripts\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.195924 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-config-data\") pod \"ceilometer-0\" (UID: 
\"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.196493 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.199553 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwthw\" (UniqueName: \"kubernetes.io/projected/0d5b5d8b-4be0-469b-950f-0dbee7966330-kube-api-access-lwthw\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.338356 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.769965 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.958276 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerStarted","Data":"de4709385c905c889d0404b4681905a6e961420de6f40ec0154a0b2ff42a1386"} Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.448685 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-22m6m" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.453289 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.508145 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3461eb3-3b0d-489f-875c-bab8e4f00694-operator-scripts\") pod \"f3461eb3-3b0d-489f-875c-bab8e4f00694\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.508229 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzs4h\" (UniqueName: \"kubernetes.io/projected/a73a610c-0780-46cb-9f01-09b48049748d-kube-api-access-vzs4h\") pod \"a73a610c-0780-46cb-9f01-09b48049748d\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.508265 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a73a610c-0780-46cb-9f01-09b48049748d-operator-scripts\") pod \"a73a610c-0780-46cb-9f01-09b48049748d\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.508468 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mp4c\" (UniqueName: \"kubernetes.io/projected/f3461eb3-3b0d-489f-875c-bab8e4f00694-kube-api-access-2mp4c\") pod \"f3461eb3-3b0d-489f-875c-bab8e4f00694\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.508970 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3461eb3-3b0d-489f-875c-bab8e4f00694-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3461eb3-3b0d-489f-875c-bab8e4f00694" (UID: "f3461eb3-3b0d-489f-875c-bab8e4f00694"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.509740 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a73a610c-0780-46cb-9f01-09b48049748d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a73a610c-0780-46cb-9f01-09b48049748d" (UID: "a73a610c-0780-46cb-9f01-09b48049748d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.513695 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a73a610c-0780-46cb-9f01-09b48049748d-kube-api-access-vzs4h" (OuterVolumeSpecName: "kube-api-access-vzs4h") pod "a73a610c-0780-46cb-9f01-09b48049748d" (UID: "a73a610c-0780-46cb-9f01-09b48049748d"). InnerVolumeSpecName "kube-api-access-vzs4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.519386 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3461eb3-3b0d-489f-875c-bab8e4f00694-kube-api-access-2mp4c" (OuterVolumeSpecName: "kube-api-access-2mp4c") pod "f3461eb3-3b0d-489f-875c-bab8e4f00694" (UID: "f3461eb3-3b0d-489f-875c-bab8e4f00694"). InnerVolumeSpecName "kube-api-access-2mp4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.529902 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" path="/var/lib/kubelet/pods/632dc482-0650-4bfc-a47c-5a573888ab9a/volumes" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.610211 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mp4c\" (UniqueName: \"kubernetes.io/projected/f3461eb3-3b0d-489f-875c-bab8e4f00694-kube-api-access-2mp4c\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.610251 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3461eb3-3b0d-489f-875c-bab8e4f00694-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.610267 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzs4h\" (UniqueName: \"kubernetes.io/projected/a73a610c-0780-46cb-9f01-09b48049748d-kube-api-access-vzs4h\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.610281 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a73a610c-0780-46cb-9f01-09b48049748d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.971853 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-22m6m" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.971936 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-22m6m" event={"ID":"a73a610c-0780-46cb-9f01-09b48049748d","Type":"ContainerDied","Data":"38f4dd4710cc6d70fb81ffa7e151b462478364abb1af0aabfd4fab35dfd092aa"} Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.972900 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38f4dd4710cc6d70fb81ffa7e151b462478364abb1af0aabfd4fab35dfd092aa" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.974091 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq" event={"ID":"f3461eb3-3b0d-489f-875c-bab8e4f00694","Type":"ContainerDied","Data":"a1f1ed5e9b068ea888a42a72127ee0a7ad65e471cd55e125e4120fc7400644d8"} Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.974139 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1f1ed5e9b068ea888a42a72127ee0a7ad65e471cd55e125e4120fc7400644d8" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.974197 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.978867 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerStarted","Data":"6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd"} Jan 26 23:32:39 crc kubenswrapper[4995]: I0126 23:32:39.993571 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerStarted","Data":"25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8"} Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.612552 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf"] Jan 26 23:32:40 crc kubenswrapper[4995]: E0126 23:32:40.612950 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3461eb3-3b0d-489f-875c-bab8e4f00694" containerName="mariadb-account-create-update" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.612973 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3461eb3-3b0d-489f-875c-bab8e4f00694" containerName="mariadb-account-create-update" Jan 26 23:32:40 crc kubenswrapper[4995]: E0126 23:32:40.612988 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a73a610c-0780-46cb-9f01-09b48049748d" containerName="mariadb-database-create" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.612997 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a73a610c-0780-46cb-9f01-09b48049748d" containerName="mariadb-database-create" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.613193 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3461eb3-3b0d-489f-875c-bab8e4f00694" containerName="mariadb-account-create-update" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 
23:32:40.613214 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a73a610c-0780-46cb-9f01-09b48049748d" containerName="mariadb-database-create" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.613714 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.618341 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-bhj8k" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.618658 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.625409 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf"] Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.743314 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-config-data\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.743639 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-db-sync-config-data\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.743719 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.743738 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8vmw\" (UniqueName: \"kubernetes.io/projected/1a50a8e0-765f-4f78-8204-78064fe55510-kube-api-access-r8vmw\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.845491 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-config-data\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.845545 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-db-sync-config-data\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.845614 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.845636 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-r8vmw\" (UniqueName: \"kubernetes.io/projected/1a50a8e0-765f-4f78-8204-78064fe55510-kube-api-access-r8vmw\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.850136 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-db-sync-config-data\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.850647 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-config-data\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.850957 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.866557 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8vmw\" (UniqueName: \"kubernetes.io/projected/1a50a8e0-765f-4f78-8204-78064fe55510-kube-api-access-r8vmw\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.893530 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.893751 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.893892 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.894579 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76f8ec744701d2466129fe4bf8df26122f8725276e4896b88abef624b66b4570"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.894713 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" containerID="cri-o://76f8ec744701d2466129fe4bf8df26122f8725276e4896b88abef624b66b4570" gracePeriod=600 Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.929256 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:41 crc kubenswrapper[4995]: I0126 23:32:41.026077 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerStarted","Data":"fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039"} Jan 26 23:32:41 crc kubenswrapper[4995]: I0126 23:32:41.496188 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf"] Jan 26 23:32:41 crc kubenswrapper[4995]: W0126 23:32:41.518046 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a50a8e0_765f_4f78_8204_78064fe55510.slice/crio-67953a5423b03f2d52808ecb0921392ec6b46b5cedc8a830bc22ced69d61f8a5 WatchSource:0}: Error finding container 67953a5423b03f2d52808ecb0921392ec6b46b5cedc8a830bc22ced69d61f8a5: Status 404 returned error can't find the container with id 67953a5423b03f2d52808ecb0921392ec6b46b5cedc8a830bc22ced69d61f8a5 Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.036530 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="76f8ec744701d2466129fe4bf8df26122f8725276e4896b88abef624b66b4570" exitCode=0 Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.036594 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"76f8ec744701d2466129fe4bf8df26122f8725276e4896b88abef624b66b4570"} Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.036828 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" 
event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"} Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.036849 4995 scope.go:117] "RemoveContainer" containerID="45bd20296ff6d5aa0cde32c140dff26a4c42cad2ac9cddbd09b95d31149b3d69" Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.038563 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" event={"ID":"1a50a8e0-765f-4f78-8204-78064fe55510","Type":"ContainerStarted","Data":"fe935962b3dd798431c17ed02d94a0c871a317035a5bd78cc9d0e159f906c4a8"} Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.038604 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" event={"ID":"1a50a8e0-765f-4f78-8204-78064fe55510","Type":"ContainerStarted","Data":"67953a5423b03f2d52808ecb0921392ec6b46b5cedc8a830bc22ced69d61f8a5"} Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.040634 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerStarted","Data":"b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3"} Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.040822 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.080185 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" podStartSLOduration=2.08017094 podStartE2EDuration="2.08017094s" podCreationTimestamp="2026-01-26 23:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:42.079728089 +0000 UTC m=+1466.244435554" watchObservedRunningTime="2026-01-26 
23:32:42.08017094 +0000 UTC m=+1466.244878405" Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.104085 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.629051124 podStartE2EDuration="5.104072398s" podCreationTimestamp="2026-01-26 23:32:37 +0000 UTC" firstStartedPulling="2026-01-26 23:32:37.774144259 +0000 UTC m=+1461.938851724" lastFinishedPulling="2026-01-26 23:32:41.249165533 +0000 UTC m=+1465.413872998" observedRunningTime="2026-01-26 23:32:42.097417622 +0000 UTC m=+1466.262125077" watchObservedRunningTime="2026-01-26 23:32:42.104072398 +0000 UTC m=+1466.268779853" Jan 26 23:32:44 crc kubenswrapper[4995]: I0126 23:32:44.069702 4995 generic.go:334] "Generic (PLEG): container finished" podID="1a50a8e0-765f-4f78-8204-78064fe55510" containerID="fe935962b3dd798431c17ed02d94a0c871a317035a5bd78cc9d0e159f906c4a8" exitCode=0 Jan 26 23:32:44 crc kubenswrapper[4995]: I0126 23:32:44.069800 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" event={"ID":"1a50a8e0-765f-4f78-8204-78064fe55510","Type":"ContainerDied","Data":"fe935962b3dd798431c17ed02d94a0c871a317035a5bd78cc9d0e159f906c4a8"} Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.575069 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.622704 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8vmw\" (UniqueName: \"kubernetes.io/projected/1a50a8e0-765f-4f78-8204-78064fe55510-kube-api-access-r8vmw\") pod \"1a50a8e0-765f-4f78-8204-78064fe55510\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.622829 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-config-data\") pod \"1a50a8e0-765f-4f78-8204-78064fe55510\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.622898 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-combined-ca-bundle\") pod \"1a50a8e0-765f-4f78-8204-78064fe55510\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.622934 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-db-sync-config-data\") pod \"1a50a8e0-765f-4f78-8204-78064fe55510\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.628321 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1a50a8e0-765f-4f78-8204-78064fe55510" (UID: "1a50a8e0-765f-4f78-8204-78064fe55510"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.628688 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a50a8e0-765f-4f78-8204-78064fe55510-kube-api-access-r8vmw" (OuterVolumeSpecName: "kube-api-access-r8vmw") pod "1a50a8e0-765f-4f78-8204-78064fe55510" (UID: "1a50a8e0-765f-4f78-8204-78064fe55510"). InnerVolumeSpecName "kube-api-access-r8vmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.650421 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a50a8e0-765f-4f78-8204-78064fe55510" (UID: "1a50a8e0-765f-4f78-8204-78064fe55510"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.667170 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-config-data" (OuterVolumeSpecName: "config-data") pod "1a50a8e0-765f-4f78-8204-78064fe55510" (UID: "1a50a8e0-765f-4f78-8204-78064fe55510"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.724971 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.725196 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.725264 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.725320 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8vmw\" (UniqueName: \"kubernetes.io/projected/1a50a8e0-765f-4f78-8204-78064fe55510-kube-api-access-r8vmw\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.123593 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" event={"ID":"1a50a8e0-765f-4f78-8204-78064fe55510","Type":"ContainerDied","Data":"67953a5423b03f2d52808ecb0921392ec6b46b5cedc8a830bc22ced69d61f8a5"} Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.123965 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67953a5423b03f2d52808ecb0921392ec6b46b5cedc8a830bc22ced69d61f8a5" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.123810 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.859039 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:46 crc kubenswrapper[4995]: E0126 23:32:46.860245 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a50a8e0-765f-4f78-8204-78064fe55510" containerName="watcher-kuttl-db-sync" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.860342 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a50a8e0-765f-4f78-8204-78064fe55510" containerName="watcher-kuttl-db-sync" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.860594 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a50a8e0-765f-4f78-8204-78064fe55510" containerName="watcher-kuttl-db-sync" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.861347 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.863224 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-bhj8k" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.870689 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.875475 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.920549 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.921966 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.924036 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.931828 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.932878 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.934397 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.945630 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.945666 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.945710 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") 
" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.945728 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k2d2\" (UniqueName: \"kubernetes.io/projected/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-kube-api-access-8k2d2\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.945963 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.950446 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.000584 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.047853 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.047921 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32336662-bff8-4aca-afa4-2039d421a770-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.047957 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.047979 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048011 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k2d2\" (UniqueName: \"kubernetes.io/projected/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-kube-api-access-8k2d2\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048042 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048087 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b459a34f-abd7-4350-8b91-c57b5124cbcf-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048178 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048221 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048243 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048348 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048425 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048481 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048526 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048544 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048670 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xp82\" (UniqueName: \"kubernetes.io/projected/b459a34f-abd7-4350-8b91-c57b5124cbcf-kube-api-access-7xp82\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048711 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh9qf\" (UniqueName: 
\"kubernetes.io/projected/32336662-bff8-4aca-afa4-2039d421a770-kube-api-access-gh9qf\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048813 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.052587 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.052618 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.053052 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.071654 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k2d2\" (UniqueName: 
\"kubernetes.io/projected/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-kube-api-access-8k2d2\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150526 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150776 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150801 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150823 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150848 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150880 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xp82\" (UniqueName: \"kubernetes.io/projected/b459a34f-abd7-4350-8b91-c57b5124cbcf-kube-api-access-7xp82\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150896 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh9qf\" (UniqueName: \"kubernetes.io/projected/32336662-bff8-4aca-afa4-2039d421a770-kube-api-access-gh9qf\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150938 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32336662-bff8-4aca-afa4-2039d421a770-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.151319 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32336662-bff8-4aca-afa4-2039d421a770-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.151357 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.151396 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.151434 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b459a34f-abd7-4350-8b91-c57b5124cbcf-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.151714 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.151917 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b459a34f-abd7-4350-8b91-c57b5124cbcf-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.154749 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-custom-prometheus-ca\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.154906 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.154998 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.155161 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.155228 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.156275 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: 
\"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.156632 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.168409 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.170321 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh9qf\" (UniqueName: \"kubernetes.io/projected/32336662-bff8-4aca-afa4-2039d421a770-kube-api-access-gh9qf\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.171443 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xp82\" (UniqueName: \"kubernetes.io/projected/b459a34f-abd7-4350-8b91-c57b5124cbcf-kube-api-access-7xp82\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.180567 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.237248 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.249190 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.634692 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:47 crc kubenswrapper[4995]: W0126 23:32:47.639834 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcb3c5f3_cb09_4f84_bcf6_79b0bebf2602.slice/crio-4ab072bfb95a7246f80622b096ad1314fc5881d224dbd69c2e091a13f6d01656 WatchSource:0}: Error finding container 4ab072bfb95a7246f80622b096ad1314fc5881d224dbd69c2e091a13f6d01656: Status 404 returned error can't find the container with id 4ab072bfb95a7246f80622b096ad1314fc5881d224dbd69c2e091a13f6d01656 Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.738675 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:47 crc kubenswrapper[4995]: W0126 23:32:47.755458 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb459a34f_abd7_4350_8b91_c57b5124cbcf.slice/crio-86bddedc9072a6ed3ed3e4d2162a5c0bb2352a12c7cc9f1e8973aedd59c14120 WatchSource:0}: Error finding container 86bddedc9072a6ed3ed3e4d2162a5c0bb2352a12c7cc9f1e8973aedd59c14120: Status 404 returned error can't find the container with id 86bddedc9072a6ed3ed3e4d2162a5c0bb2352a12c7cc9f1e8973aedd59c14120 Jan 26 23:32:47 crc kubenswrapper[4995]: W0126 23:32:47.857660 4995 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32336662_bff8_4aca_afa4_2039d421a770.slice/crio-9f1cd4619ee90776d56e36685fea9f144f4d5c6f3e290c4ee750414a618009a6 WatchSource:0}: Error finding container 9f1cd4619ee90776d56e36685fea9f144f4d5c6f3e290c4ee750414a618009a6: Status 404 returned error can't find the container with id 9f1cd4619ee90776d56e36685fea9f144f4d5c6f3e290c4ee750414a618009a6
Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.868178 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.140584 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602","Type":"ContainerStarted","Data":"c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565"}
Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.140628 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602","Type":"ContainerStarted","Data":"4ab072bfb95a7246f80622b096ad1314fc5881d224dbd69c2e091a13f6d01656"}
Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.142794 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"32336662-bff8-4aca-afa4-2039d421a770","Type":"ContainerStarted","Data":"d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540"}
Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.142849 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"32336662-bff8-4aca-afa4-2039d421a770","Type":"ContainerStarted","Data":"9f1cd4619ee90776d56e36685fea9f144f4d5c6f3e290c4ee750414a618009a6"}
Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.144554 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b459a34f-abd7-4350-8b91-c57b5124cbcf","Type":"ContainerStarted","Data":"d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c"}
Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.144589 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b459a34f-abd7-4350-8b91-c57b5124cbcf","Type":"ContainerStarted","Data":"5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a"}
Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.144599 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b459a34f-abd7-4350-8b91-c57b5124cbcf","Type":"ContainerStarted","Data":"86bddedc9072a6ed3ed3e4d2162a5c0bb2352a12c7cc9f1e8973aedd59c14120"}
Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.145485 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.146519 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.180:9322/\": dial tcp 10.217.0.180:9322: connect: connection refused"
Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.157346 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.157327684 podStartE2EDuration="2.157327684s" podCreationTimestamp="2026-01-26 23:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:48.1551827 +0000 UTC m=+1472.319890175" watchObservedRunningTime="2026-01-26 23:32:48.157327684 +0000 UTC m=+1472.322035149"
Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.175166 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.17514877 podStartE2EDuration="2.17514877s" podCreationTimestamp="2026-01-26 23:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:48.173179771 +0000 UTC m=+1472.337887236" watchObservedRunningTime="2026-01-26 23:32:48.17514877 +0000 UTC m=+1472.339856235"
Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.194361 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.19433654 podStartE2EDuration="2.19433654s" podCreationTimestamp="2026-01-26 23:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:48.188390642 +0000 UTC m=+1472.353098107" watchObservedRunningTime="2026-01-26 23:32:48.19433654 +0000 UTC m=+1472.359044015"
Jan 26 23:32:51 crc kubenswrapper[4995]: I0126 23:32:51.230564 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:32:52 crc kubenswrapper[4995]: I0126 23:32:52.180846 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:32:52 crc kubenswrapper[4995]: I0126 23:32:52.239199 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:32:57 crc kubenswrapper[4995]: I0126 23:32:57.181142 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:32:57 crc kubenswrapper[4995]: I0126 23:32:57.212624 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:32:57 crc kubenswrapper[4995]: I0126 23:32:57.238275 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:32:57 crc kubenswrapper[4995]: I0126 23:32:57.243338 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:32:57 crc kubenswrapper[4995]: I0126 23:32:57.249380 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:32:57 crc kubenswrapper[4995]: I0126 23:32:57.288835 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:32:57 crc kubenswrapper[4995]: I0126 23:32:57.314370 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:32:58 crc kubenswrapper[4995]: I0126 23:32:58.235688 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:32:58 crc kubenswrapper[4995]: I0126 23:32:58.317538 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:32:58 crc kubenswrapper[4995]: I0126 23:32:58.392978 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:33:00 crc kubenswrapper[4995]: I0126 23:33:00.395913 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:33:00 crc kubenswrapper[4995]: I0126 23:33:00.396713 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-central-agent" containerID="cri-o://6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd" gracePeriod=30
Jan 26 23:33:00 crc kubenswrapper[4995]: I0126 23:33:00.397652 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="proxy-httpd" containerID="cri-o://b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3" gracePeriod=30
Jan 26 23:33:00 crc kubenswrapper[4995]: I0126 23:33:00.397748 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="sg-core" containerID="cri-o://fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039" gracePeriod=30
Jan 26 23:33:00 crc kubenswrapper[4995]: I0126 23:33:00.397814 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-notification-agent" containerID="cri-o://25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8" gracePeriod=30
Jan 26 23:33:00 crc kubenswrapper[4995]: I0126 23:33:00.409882 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.177:3000/\": EOF"
Jan 26 23:33:00 crc kubenswrapper[4995]: E0126 23:33:00.529049 4995 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d5b5d8b_4be0_469b_950f_0dbee7966330.slice/crio-conmon-fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.267676 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerID="b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3" exitCode=0
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.268302 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerID="fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039" exitCode=2
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.268376 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerID="6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd" exitCode=0
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.267746 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerDied","Data":"b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3"}
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.268539 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerDied","Data":"fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039"}
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.268610 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerDied","Data":"6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd"}
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.831982 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf"]
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.840449 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf"]
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.874206 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher8707-account-delete-8wgxs"]
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.875383 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs"
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.889854 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.890585 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" containerName="watcher-applier" containerID="cri-o://c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" gracePeriod=30
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.896855 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher8707-account-delete-8wgxs"]
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.940794 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eead2da-27a3-4ce5-9098-ac9564a6b27a-operator-scripts\") pod \"watcher8707-account-delete-8wgxs\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs"
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.940958 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88rvs\" (UniqueName: \"kubernetes.io/projected/0eead2da-27a3-4ce5-9098-ac9564a6b27a-kube-api-access-88rvs\") pod \"watcher8707-account-delete-8wgxs\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs"
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.957724 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.957998 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="32336662-bff8-4aca-afa4-2039d421a770" containerName="watcher-decision-engine" containerID="cri-o://d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540" gracePeriod=30
Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.000553 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.000758 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-kuttl-api-log" containerID="cri-o://5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a" gracePeriod=30
Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.000925 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-api" containerID="cri-o://d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c" gracePeriod=30
Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.042801 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eead2da-27a3-4ce5-9098-ac9564a6b27a-operator-scripts\") pod \"watcher8707-account-delete-8wgxs\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs"
Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.042885 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88rvs\" (UniqueName: \"kubernetes.io/projected/0eead2da-27a3-4ce5-9098-ac9564a6b27a-kube-api-access-88rvs\") pod \"watcher8707-account-delete-8wgxs\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs"
Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.043797 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eead2da-27a3-4ce5-9098-ac9564a6b27a-operator-scripts\") pod \"watcher8707-account-delete-8wgxs\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs"
Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.077890 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88rvs\" (UniqueName: \"kubernetes.io/projected/0eead2da-27a3-4ce5-9098-ac9564a6b27a-kube-api-access-88rvs\") pod \"watcher8707-account-delete-8wgxs\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs"
Jan 26 23:33:02 crc kubenswrapper[4995]: E0126 23:33:02.191552 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.192677 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs"
Jan 26 23:33:02 crc kubenswrapper[4995]: E0126 23:33:02.193270 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 26 23:33:02 crc kubenswrapper[4995]: E0126 23:33:02.196274 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 26 23:33:02 crc kubenswrapper[4995]: E0126 23:33:02.196361 4995 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" containerName="watcher-applier"
Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.279761 4995 generic.go:334] "Generic (PLEG): container finished" podID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerID="5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a" exitCode=143
Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.279805 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b459a34f-abd7-4350-8b91-c57b5124cbcf","Type":"ContainerDied","Data":"5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a"}
Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.520717 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.180:9322/\": read tcp 10.217.0.2:56578->10.217.0.180:9322: read: connection reset by peer"
Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.520723 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.180:9322/\": read tcp 10.217.0.2:56594->10.217.0.180:9322: read: connection reset by peer"
Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.525247 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a50a8e0-765f-4f78-8204-78064fe55510" path="/var/lib/kubelet/pods/1a50a8e0-765f-4f78-8204-78064fe55510/volumes"
Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.695702 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher8707-account-delete-8wgxs"]
Jan 26 23:33:02 crc kubenswrapper[4995]: W0126 23:33:02.715028 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eead2da_27a3_4ce5_9098_ac9564a6b27a.slice/crio-6937b7c319f771f017a0cb726fcee564c30aaae8c5871dcc04823178dd941057 WatchSource:0}: Error finding container 6937b7c319f771f017a0cb726fcee564c30aaae8c5871dcc04823178dd941057: Status 404 returned error can't find the container with id 6937b7c319f771f017a0cb726fcee564c30aaae8c5871dcc04823178dd941057
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.001528 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.060833 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-config-data\") pod \"b459a34f-abd7-4350-8b91-c57b5124cbcf\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") "
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.060933 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b459a34f-abd7-4350-8b91-c57b5124cbcf-logs\") pod \"b459a34f-abd7-4350-8b91-c57b5124cbcf\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") "
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.060956 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-cert-memcached-mtls\") pod \"b459a34f-abd7-4350-8b91-c57b5124cbcf\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") "
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.060982 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-custom-prometheus-ca\") pod \"b459a34f-abd7-4350-8b91-c57b5124cbcf\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") "
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.061027 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xp82\" (UniqueName: \"kubernetes.io/projected/b459a34f-abd7-4350-8b91-c57b5124cbcf-kube-api-access-7xp82\") pod \"b459a34f-abd7-4350-8b91-c57b5124cbcf\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") "
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.061111 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-combined-ca-bundle\") pod \"b459a34f-abd7-4350-8b91-c57b5124cbcf\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") "
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.065392 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b459a34f-abd7-4350-8b91-c57b5124cbcf-logs" (OuterVolumeSpecName: "logs") pod "b459a34f-abd7-4350-8b91-c57b5124cbcf" (UID: "b459a34f-abd7-4350-8b91-c57b5124cbcf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.083645 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b459a34f-abd7-4350-8b91-c57b5124cbcf-kube-api-access-7xp82" (OuterVolumeSpecName: "kube-api-access-7xp82") pod "b459a34f-abd7-4350-8b91-c57b5124cbcf" (UID: "b459a34f-abd7-4350-8b91-c57b5124cbcf"). InnerVolumeSpecName "kube-api-access-7xp82". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.100820 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b459a34f-abd7-4350-8b91-c57b5124cbcf" (UID: "b459a34f-abd7-4350-8b91-c57b5124cbcf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.102792 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b459a34f-abd7-4350-8b91-c57b5124cbcf" (UID: "b459a34f-abd7-4350-8b91-c57b5124cbcf"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.136229 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-config-data" (OuterVolumeSpecName: "config-data") pod "b459a34f-abd7-4350-8b91-c57b5124cbcf" (UID: "b459a34f-abd7-4350-8b91-c57b5124cbcf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.163660 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b459a34f-abd7-4350-8b91-c57b5124cbcf-logs\") on node \"crc\" DevicePath \"\""
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.163695 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.163708 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xp82\" (UniqueName: \"kubernetes.io/projected/b459a34f-abd7-4350-8b91-c57b5124cbcf-kube-api-access-7xp82\") on node \"crc\" DevicePath \"\""
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.163716 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.163983 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.224237 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "b459a34f-abd7-4350-8b91-c57b5124cbcf" (UID: "b459a34f-abd7-4350-8b91-c57b5124cbcf"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.265746 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.288547 4995 generic.go:334] "Generic (PLEG): container finished" podID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerID="d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c" exitCode=0
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.288603 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.288640 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b459a34f-abd7-4350-8b91-c57b5124cbcf","Type":"ContainerDied","Data":"d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c"}
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.289349 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b459a34f-abd7-4350-8b91-c57b5124cbcf","Type":"ContainerDied","Data":"86bddedc9072a6ed3ed3e4d2162a5c0bb2352a12c7cc9f1e8973aedd59c14120"}
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.289411 4995 scope.go:117] "RemoveContainer" containerID="d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c"
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.290215 4995 generic.go:334] "Generic (PLEG): container finished" podID="0eead2da-27a3-4ce5-9098-ac9564a6b27a" containerID="7e3ee0bb83f474f59b73fb0e9420f6ea26d6576fd1f9c21251e039a52f0471bc" exitCode=0
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.290250 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" event={"ID":"0eead2da-27a3-4ce5-9098-ac9564a6b27a","Type":"ContainerDied","Data":"7e3ee0bb83f474f59b73fb0e9420f6ea26d6576fd1f9c21251e039a52f0471bc"}
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.290274 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" event={"ID":"0eead2da-27a3-4ce5-9098-ac9564a6b27a","Type":"ContainerStarted","Data":"6937b7c319f771f017a0cb726fcee564c30aaae8c5871dcc04823178dd941057"}
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.335580 4995 scope.go:117] "RemoveContainer" containerID="5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a"
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.342899 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.354560 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.363537 4995 scope.go:117] "RemoveContainer" containerID="d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c"
Jan 26 23:33:03 crc kubenswrapper[4995]: E0126 23:33:03.364770 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c\": container with ID starting with d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c not found: ID does not exist" containerID="d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c"
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.364802 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c"} err="failed to get container status \"d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c\": rpc error: code = NotFound desc = could not find container \"d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c\": container with ID starting with d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c not found: ID does not exist"
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.364822 4995 scope.go:117] "RemoveContainer" containerID="5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a"
Jan 26 23:33:03 crc kubenswrapper[4995]: E0126 23:33:03.365189 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a\": container with ID starting with 5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a not found: ID does not exist" containerID="5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a"
Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.365248 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a"} err="failed to get container status \"5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a\": rpc error: code = NotFound desc = could not find container \"5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a\": container with ID starting with 5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a not found: ID does not exist"
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.138546 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.279990 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-config-data\") pod \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") "
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.280082 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-logs\") pod \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") "
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.280184 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k2d2\" (UniqueName: \"kubernetes.io/projected/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-kube-api-access-8k2d2\") pod \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") "
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.280203 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-combined-ca-bundle\") pod \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") "
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.280224 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-cert-memcached-mtls\") pod \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") "
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.280593 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-logs" (OuterVolumeSpecName: "logs") pod "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" (UID: "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.286289 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-kube-api-access-8k2d2" (OuterVolumeSpecName: "kube-api-access-8k2d2") pod "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" (UID: "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602"). InnerVolumeSpecName "kube-api-access-8k2d2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.301545 4995 generic.go:334] "Generic (PLEG): container finished" podID="dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" containerID="c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" exitCode=0
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.301616 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602","Type":"ContainerDied","Data":"c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565"}
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.301637 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.301643 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602","Type":"ContainerDied","Data":"4ab072bfb95a7246f80622b096ad1314fc5881d224dbd69c2e091a13f6d01656"}
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.301654 4995 scope.go:117] "RemoveContainer" containerID="c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565"
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.335316 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" (UID: "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.354209 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-config-data" (OuterVolumeSpecName: "config-data") pod "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" (UID: "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.357190 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" (UID: "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602"). InnerVolumeSpecName "cert-memcached-mtls".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.382058 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.382095 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.382118 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k2d2\" (UniqueName: \"kubernetes.io/projected/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-kube-api-access-8k2d2\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.382130 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.382140 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.409076 4995 scope.go:117] "RemoveContainer" containerID="c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" Jan 26 23:33:04 crc kubenswrapper[4995]: E0126 23:33:04.409554 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565\": container with ID starting with c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565 not found: ID does not exist" 
containerID="c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.409599 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565"} err="failed to get container status \"c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565\": rpc error: code = NotFound desc = could not find container \"c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565\": container with ID starting with c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565 not found: ID does not exist" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.525829 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" path="/var/lib/kubelet/pods/b459a34f-abd7-4350-8b91-c57b5124cbcf/volumes" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.628059 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.630094 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.642157 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.686594 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88rvs\" (UniqueName: \"kubernetes.io/projected/0eead2da-27a3-4ce5-9098-ac9564a6b27a-kube-api-access-88rvs\") pod \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.686747 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eead2da-27a3-4ce5-9098-ac9564a6b27a-operator-scripts\") pod \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.687573 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eead2da-27a3-4ce5-9098-ac9564a6b27a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0eead2da-27a3-4ce5-9098-ac9564a6b27a" (UID: "0eead2da-27a3-4ce5-9098-ac9564a6b27a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.691044 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eead2da-27a3-4ce5-9098-ac9564a6b27a-kube-api-access-88rvs" (OuterVolumeSpecName: "kube-api-access-88rvs") pod "0eead2da-27a3-4ce5-9098-ac9564a6b27a" (UID: "0eead2da-27a3-4ce5-9098-ac9564a6b27a"). InnerVolumeSpecName "kube-api-access-88rvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.788688 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88rvs\" (UniqueName: \"kubernetes.io/projected/0eead2da-27a3-4ce5-9098-ac9564a6b27a-kube-api-access-88rvs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.788718 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eead2da-27a3-4ce5-9098-ac9564a6b27a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.314424 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.314429 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" event={"ID":"0eead2da-27a3-4ce5-9098-ac9564a6b27a","Type":"ContainerDied","Data":"6937b7c319f771f017a0cb726fcee564c30aaae8c5871dcc04823178dd941057"} Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.315171 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6937b7c319f771f017a0cb726fcee564c30aaae8c5871dcc04823178dd941057" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.789309 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.909162 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-config-data\") pod \"32336662-bff8-4aca-afa4-2039d421a770\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.909477 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-combined-ca-bundle\") pod \"32336662-bff8-4aca-afa4-2039d421a770\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.909530 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32336662-bff8-4aca-afa4-2039d421a770-logs\") pod \"32336662-bff8-4aca-afa4-2039d421a770\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.909574 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-cert-memcached-mtls\") pod \"32336662-bff8-4aca-afa4-2039d421a770\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.909598 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-custom-prometheus-ca\") pod \"32336662-bff8-4aca-afa4-2039d421a770\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.909616 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-gh9qf\" (UniqueName: \"kubernetes.io/projected/32336662-bff8-4aca-afa4-2039d421a770-kube-api-access-gh9qf\") pod \"32336662-bff8-4aca-afa4-2039d421a770\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.910296 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32336662-bff8-4aca-afa4-2039d421a770-logs" (OuterVolumeSpecName: "logs") pod "32336662-bff8-4aca-afa4-2039d421a770" (UID: "32336662-bff8-4aca-afa4-2039d421a770"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.910549 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32336662-bff8-4aca-afa4-2039d421a770-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.916715 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32336662-bff8-4aca-afa4-2039d421a770-kube-api-access-gh9qf" (OuterVolumeSpecName: "kube-api-access-gh9qf") pod "32336662-bff8-4aca-afa4-2039d421a770" (UID: "32336662-bff8-4aca-afa4-2039d421a770"). InnerVolumeSpecName "kube-api-access-gh9qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.933842 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32336662-bff8-4aca-afa4-2039d421a770" (UID: "32336662-bff8-4aca-afa4-2039d421a770"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.941259 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "32336662-bff8-4aca-afa4-2039d421a770" (UID: "32336662-bff8-4aca-afa4-2039d421a770"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.946387 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-config-data" (OuterVolumeSpecName: "config-data") pod "32336662-bff8-4aca-afa4-2039d421a770" (UID: "32336662-bff8-4aca-afa4-2039d421a770"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.979402 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "32336662-bff8-4aca-afa4-2039d421a770" (UID: "32336662-bff8-4aca-afa4-2039d421a770"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.982672 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.040235 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.040265 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.040276 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.040284 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.040293 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh9qf\" (UniqueName: \"kubernetes.io/projected/32336662-bff8-4aca-afa4-2039d421a770-kube-api-access-gh9qf\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141354 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-log-httpd\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141443 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-config-data\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141487 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwthw\" (UniqueName: \"kubernetes.io/projected/0d5b5d8b-4be0-469b-950f-0dbee7966330-kube-api-access-lwthw\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141518 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-run-httpd\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141620 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-combined-ca-bundle\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141649 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-scripts\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141678 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-sg-core-conf-yaml\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141705 
4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-ceilometer-tls-certs\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.142275 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.142395 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.146217 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d5b5d8b-4be0-469b-950f-0dbee7966330-kube-api-access-lwthw" (OuterVolumeSpecName: "kube-api-access-lwthw") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "kube-api-access-lwthw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.146698 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-scripts" (OuterVolumeSpecName: "scripts") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.165898 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.217344 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.228428 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.243304 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.243485 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwthw\" (UniqueName: \"kubernetes.io/projected/0d5b5d8b-4be0-469b-950f-0dbee7966330-kube-api-access-lwthw\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.243541 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.243598 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.243647 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.243740 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.243805 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.270087 4995 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-config-data" (OuterVolumeSpecName: "config-data") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.325450 4995 generic.go:334] "Generic (PLEG): container finished" podID="32336662-bff8-4aca-afa4-2039d421a770" containerID="d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540" exitCode=0 Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.325515 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"32336662-bff8-4aca-afa4-2039d421a770","Type":"ContainerDied","Data":"d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540"} Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.325535 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.325558 4995 scope.go:117] "RemoveContainer" containerID="d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.325546 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"32336662-bff8-4aca-afa4-2039d421a770","Type":"ContainerDied","Data":"9f1cd4619ee90776d56e36685fea9f144f4d5c6f3e290c4ee750414a618009a6"} Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.332317 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerID="25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8" exitCode=0 Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.332378 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerDied","Data":"25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8"} Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.332439 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerDied","Data":"de4709385c905c889d0404b4681905a6e961420de6f40ec0154a0b2ff42a1386"} Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.332533 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.345318 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.358431 4995 scope.go:117] "RemoveContainer" containerID="d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540" Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.359050 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540\": container with ID starting with d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540 not found: ID does not exist" containerID="d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.359091 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540"} err="failed to get container status \"d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540\": rpc error: code = NotFound desc = could not find container \"d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540\": container with ID starting with d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540 not found: ID does not exist" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.359132 4995 scope.go:117] "RemoveContainer" containerID="b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.390832 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.398180 4995 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.404167 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.408517 4995 scope.go:117] "RemoveContainer" containerID="fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.411996 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.421579 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.422571 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-notification-agent"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.422652 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-notification-agent"
Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.422736 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" containerName="watcher-applier"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.422791 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" containerName="watcher-applier"
Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.422848 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="sg-core"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.422899 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="sg-core"
Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.422970 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-central-agent"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423023 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-central-agent"
Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.423090 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="proxy-httpd"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423166 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="proxy-httpd"
Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.423227 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-kuttl-api-log"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423282 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-kuttl-api-log"
Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.423342 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32336662-bff8-4aca-afa4-2039d421a770" containerName="watcher-decision-engine"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423397 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="32336662-bff8-4aca-afa4-2039d421a770" containerName="watcher-decision-engine"
Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.423500 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eead2da-27a3-4ce5-9098-ac9564a6b27a" containerName="mariadb-account-delete"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423560 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eead2da-27a3-4ce5-9098-ac9564a6b27a" containerName="mariadb-account-delete"
Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.423621 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-api"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423674 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-api"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423923 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-api"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423997 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="sg-core"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.424064 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="32336662-bff8-4aca-afa4-2039d421a770" containerName="watcher-decision-engine"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.424151 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" containerName="watcher-applier"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.424210 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-central-agent"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.424259 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-notification-agent"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.424311 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="proxy-httpd"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.424355 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-kuttl-api-log"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.424401 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eead2da-27a3-4ce5-9098-ac9564a6b27a" containerName="mariadb-account-delete"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.426418 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.429472 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.430044 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.430908 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.440508 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.470210 4995 scope.go:117] "RemoveContainer" containerID="25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.496400 4995 scope.go:117] "RemoveContainer" containerID="6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.515941 4995 scope.go:117] "RemoveContainer" containerID="b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3"
Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.516414 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3\": container with ID starting with b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3 not found: ID does not exist" containerID="b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.516444 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3"} err="failed to get container status \"b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3\": rpc error: code = NotFound desc = could not find container \"b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3\": container with ID starting with b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3 not found: ID does not exist"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.516466 4995 scope.go:117] "RemoveContainer" containerID="fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039"
Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.516763 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039\": container with ID starting with fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039 not found: ID does not exist" containerID="fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.516812 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039"} err="failed to get container status \"fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039\": rpc error: code = NotFound desc = could not find container \"fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039\": container with ID starting with fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039 not found: ID does not exist"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.516842 4995 scope.go:117] "RemoveContainer" containerID="25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8"
Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.518504 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8\": container with ID starting with 25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8 not found: ID does not exist" containerID="25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.518528 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8"} err="failed to get container status \"25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8\": rpc error: code = NotFound desc = could not find container \"25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8\": container with ID starting with 25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8 not found: ID does not exist"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.518542 4995 scope.go:117] "RemoveContainer" containerID="6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd"
Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.518879 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd\": container with ID starting with 6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd not found: ID does not exist" containerID="6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.518908 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd"} err="failed to get container status \"6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd\": rpc error: code = NotFound desc = could not find container \"6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd\": container with ID starting with 6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd not found: ID does not exist"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.527966 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" path="/var/lib/kubelet/pods/0d5b5d8b-4be0-469b-950f-0dbee7966330/volumes"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.528850 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32336662-bff8-4aca-afa4-2039d421a770" path="/var/lib/kubelet/pods/32336662-bff8-4aca-afa4-2039d421a770/volumes"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.529516 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" path="/var/lib/kubelet/pods/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602/volumes"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.548776 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.548829 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-config-data\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.549028 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-log-httpd\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.549111 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.549223 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-scripts\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.549307 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.549379 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfszm\" (UniqueName: \"kubernetes.io/projected/d3ce857e-376e-4fd3-b74a-17165502ac6d-kube-api-access-wfszm\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.549423 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-run-httpd\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.650776 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.650845 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfszm\" (UniqueName: \"kubernetes.io/projected/d3ce857e-376e-4fd3-b74a-17165502ac6d-kube-api-access-wfszm\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.650881 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-run-httpd\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.650935 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.650959 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-config-data\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.651074 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-log-httpd\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.651725 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-log-httpd\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.651998 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.652055 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-run-httpd\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.652091 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-scripts\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.656321 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.656331 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.656706 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-config-data\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.656805 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.657164 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-scripts\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.674355 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfszm\" (UniqueName: \"kubernetes.io/projected/d3ce857e-376e-4fd3-b74a-17165502ac6d-kube-api-access-wfszm\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.749614 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.913655 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-22m6m"]
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.926021 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-22m6m"]
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.953089 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher8707-account-delete-8wgxs"]
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.965198 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"]
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.969144 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher8707-account-delete-8wgxs"]
Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.973320 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"]
Jan 26 23:33:07 crc kubenswrapper[4995]: I0126 23:33:07.142748 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:33:07 crc kubenswrapper[4995]: W0126 23:33:07.156807 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ce857e_376e_4fd3_b74a_17165502ac6d.slice/crio-58ac57ce8561ca84068d3e00e6215e8d0c2515e11d417b9339ed05b0b53177bc WatchSource:0}: Error finding container 58ac57ce8561ca84068d3e00e6215e8d0c2515e11d417b9339ed05b0b53177bc: Status 404 returned error can't find the container with id 58ac57ce8561ca84068d3e00e6215e8d0c2515e11d417b9339ed05b0b53177bc
Jan 26 23:33:07 crc kubenswrapper[4995]: I0126 23:33:07.341684 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerStarted","Data":"58ac57ce8561ca84068d3e00e6215e8d0c2515e11d417b9339ed05b0b53177bc"}
Jan 26 23:33:07 crc kubenswrapper[4995]: I0126 23:33:07.913565 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-65c6n"]
Jan 26 23:33:07 crc kubenswrapper[4995]: I0126 23:33:07.915639 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-65c6n"
Jan 26 23:33:07 crc kubenswrapper[4995]: I0126 23:33:07.928050 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-65c6n"]
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.020316 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-594d-account-create-update-54znd"]
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.021242 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.023153 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.029670 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-594d-account-create-update-54znd"]
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.081682 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb78169-d22d-4b1a-a51b-ad25391e10e9-operator-scripts\") pod \"watcher-db-create-65c6n\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") " pod="watcher-kuttl-default/watcher-db-create-65c6n"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.081841 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76hxl\" (UniqueName: \"kubernetes.io/projected/0eb78169-d22d-4b1a-a51b-ad25391e10e9-kube-api-access-76hxl\") pod \"watcher-db-create-65c6n\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") " pod="watcher-kuttl-default/watcher-db-create-65c6n"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.183738 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb78169-d22d-4b1a-a51b-ad25391e10e9-operator-scripts\") pod \"watcher-db-create-65c6n\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") " pod="watcher-kuttl-default/watcher-db-create-65c6n"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.183784 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76hxl\" (UniqueName: \"kubernetes.io/projected/0eb78169-d22d-4b1a-a51b-ad25391e10e9-kube-api-access-76hxl\") pod \"watcher-db-create-65c6n\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") " pod="watcher-kuttl-default/watcher-db-create-65c6n"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.183826 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35e92c48-e139-4a90-8601-1bd4d2937700-operator-scripts\") pod \"watcher-594d-account-create-update-54znd\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") " pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.183874 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbv9l\" (UniqueName: \"kubernetes.io/projected/35e92c48-e139-4a90-8601-1bd4d2937700-kube-api-access-vbv9l\") pod \"watcher-594d-account-create-update-54znd\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") " pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.184551 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb78169-d22d-4b1a-a51b-ad25391e10e9-operator-scripts\") pod \"watcher-db-create-65c6n\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") " pod="watcher-kuttl-default/watcher-db-create-65c6n"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.204029 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76hxl\" (UniqueName: \"kubernetes.io/projected/0eb78169-d22d-4b1a-a51b-ad25391e10e9-kube-api-access-76hxl\") pod \"watcher-db-create-65c6n\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") " pod="watcher-kuttl-default/watcher-db-create-65c6n"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.232992 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-65c6n"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.287418 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbv9l\" (UniqueName: \"kubernetes.io/projected/35e92c48-e139-4a90-8601-1bd4d2937700-kube-api-access-vbv9l\") pod \"watcher-594d-account-create-update-54znd\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") " pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.287719 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35e92c48-e139-4a90-8601-1bd4d2937700-operator-scripts\") pod \"watcher-594d-account-create-update-54znd\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") " pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.288996 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35e92c48-e139-4a90-8601-1bd4d2937700-operator-scripts\") pod \"watcher-594d-account-create-update-54znd\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") " pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.311484 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbv9l\" (UniqueName: \"kubernetes.io/projected/35e92c48-e139-4a90-8601-1bd4d2937700-kube-api-access-vbv9l\") pod \"watcher-594d-account-create-update-54znd\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") " pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.340523 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.364308 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerStarted","Data":"c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468"}
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.542510 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eead2da-27a3-4ce5-9098-ac9564a6b27a" path="/var/lib/kubelet/pods/0eead2da-27a3-4ce5-9098-ac9564a6b27a/volumes"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.543173 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a73a610c-0780-46cb-9f01-09b48049748d" path="/var/lib/kubelet/pods/a73a610c-0780-46cb-9f01-09b48049748d/volumes"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.547403 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3461eb3-3b0d-489f-875c-bab8e4f00694" path="/var/lib/kubelet/pods/f3461eb3-3b0d-489f-875c-bab8e4f00694/volumes"
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.792864 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-65c6n"]
Jan 26 23:33:08 crc kubenswrapper[4995]: W0126 23:33:08.795537 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eb78169_d22d_4b1a_a51b_ad25391e10e9.slice/crio-00a58f46f313c7a27615c7d285cd08ef75780713c74ce48aec7c28ae7f63a2dd WatchSource:0}: Error finding container 00a58f46f313c7a27615c7d285cd08ef75780713c74ce48aec7c28ae7f63a2dd: Status 404 returned error can't find the container with id 00a58f46f313c7a27615c7d285cd08ef75780713c74ce48aec7c28ae7f63a2dd
Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.932448 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-594d-account-create-update-54znd"]
Jan 26 23:33:08 crc kubenswrapper[4995]: W0126 23:33:08.934365 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35e92c48_e139_4a90_8601_1bd4d2937700.slice/crio-9dc1cc05dd16cab1ea05b901d60885348d2ac583025c19c754c5a6da05fffc68 WatchSource:0}: Error finding container 9dc1cc05dd16cab1ea05b901d60885348d2ac583025c19c754c5a6da05fffc68: Status 404 returned error can't find the container with id 9dc1cc05dd16cab1ea05b901d60885348d2ac583025c19c754c5a6da05fffc68
Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.372314 4995 generic.go:334] "Generic (PLEG): container finished" podID="0eb78169-d22d-4b1a-a51b-ad25391e10e9" containerID="1866d568d45be33fe5efec6245bd56a7ca5c85d09dddb97e98e3df586623483f" exitCode=0
Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.372364 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-65c6n" event={"ID":"0eb78169-d22d-4b1a-a51b-ad25391e10e9","Type":"ContainerDied","Data":"1866d568d45be33fe5efec6245bd56a7ca5c85d09dddb97e98e3df586623483f"}
Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.372732 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-65c6n" event={"ID":"0eb78169-d22d-4b1a-a51b-ad25391e10e9","Type":"ContainerStarted","Data":"00a58f46f313c7a27615c7d285cd08ef75780713c74ce48aec7c28ae7f63a2dd"}
Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.374682 4995 generic.go:334] "Generic (PLEG): container finished" podID="35e92c48-e139-4a90-8601-1bd4d2937700" containerID="cf3bdba0bcbd9d81e57b55b762961560e4562c68a0aaacec99cefb4e736c2028" exitCode=0
Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.374755 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" event={"ID":"35e92c48-e139-4a90-8601-1bd4d2937700","Type":"ContainerDied","Data":"cf3bdba0bcbd9d81e57b55b762961560e4562c68a0aaacec99cefb4e736c2028"}
Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.374780 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" event={"ID":"35e92c48-e139-4a90-8601-1bd4d2937700","Type":"ContainerStarted","Data":"9dc1cc05dd16cab1ea05b901d60885348d2ac583025c19c754c5a6da05fffc68"}
Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.377678 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerStarted","Data":"3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427"}
Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.377711 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerStarted","Data":"3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885"}
Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.930279 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-65c6n"
Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.940057 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd"
Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.976328 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbv9l\" (UniqueName: \"kubernetes.io/projected/35e92c48-e139-4a90-8601-1bd4d2937700-kube-api-access-vbv9l\") pod \"35e92c48-e139-4a90-8601-1bd4d2937700\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") "
Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.976614 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76hxl\" (UniqueName: \"kubernetes.io/projected/0eb78169-d22d-4b1a-a51b-ad25391e10e9-kube-api-access-76hxl\") pod \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") "
Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.976747 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb78169-d22d-4b1a-a51b-ad25391e10e9-operator-scripts\") pod \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") "
Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.976846 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35e92c48-e139-4a90-8601-1bd4d2937700-operator-scripts\") pod \"35e92c48-e139-4a90-8601-1bd4d2937700\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") "
Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.977741 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35e92c48-e139-4a90-8601-1bd4d2937700-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35e92c48-e139-4a90-8601-1bd4d2937700" (UID: "35e92c48-e139-4a90-8601-1bd4d2937700"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.981288 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eb78169-d22d-4b1a-a51b-ad25391e10e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0eb78169-d22d-4b1a-a51b-ad25391e10e9" (UID: "0eb78169-d22d-4b1a-a51b-ad25391e10e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.981708 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e92c48-e139-4a90-8601-1bd4d2937700-kube-api-access-vbv9l" (OuterVolumeSpecName: "kube-api-access-vbv9l") pod "35e92c48-e139-4a90-8601-1bd4d2937700" (UID: "35e92c48-e139-4a90-8601-1bd4d2937700"). InnerVolumeSpecName "kube-api-access-vbv9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.981769 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb78169-d22d-4b1a-a51b-ad25391e10e9-kube-api-access-76hxl" (OuterVolumeSpecName: "kube-api-access-76hxl") pod "0eb78169-d22d-4b1a-a51b-ad25391e10e9" (UID: "0eb78169-d22d-4b1a-a51b-ad25391e10e9"). InnerVolumeSpecName "kube-api-access-76hxl".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.078539 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76hxl\" (UniqueName: \"kubernetes.io/projected/0eb78169-d22d-4b1a-a51b-ad25391e10e9-kube-api-access-76hxl\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.078582 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb78169-d22d-4b1a-a51b-ad25391e10e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.078592 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35e92c48-e139-4a90-8601-1bd4d2937700-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.078604 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbv9l\" (UniqueName: \"kubernetes.io/projected/35e92c48-e139-4a90-8601-1bd4d2937700-kube-api-access-vbv9l\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.394018 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" event={"ID":"35e92c48-e139-4a90-8601-1bd4d2937700","Type":"ContainerDied","Data":"9dc1cc05dd16cab1ea05b901d60885348d2ac583025c19c754c5a6da05fffc68"} Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.394023 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.394060 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dc1cc05dd16cab1ea05b901d60885348d2ac583025c19c754c5a6da05fffc68" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.397021 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerStarted","Data":"932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca"} Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.397177 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.398671 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-65c6n" event={"ID":"0eb78169-d22d-4b1a-a51b-ad25391e10e9","Type":"ContainerDied","Data":"00a58f46f313c7a27615c7d285cd08ef75780713c74ce48aec7c28ae7f63a2dd"} Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.398718 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00a58f46f313c7a27615c7d285cd08ef75780713c74ce48aec7c28ae7f63a2dd" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.398792 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-65c6n" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.424267 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.114734999 podStartE2EDuration="5.424236579s" podCreationTimestamp="2026-01-26 23:33:06 +0000 UTC" firstStartedPulling="2026-01-26 23:33:07.158740275 +0000 UTC m=+1491.323447740" lastFinishedPulling="2026-01-26 23:33:10.468241855 +0000 UTC m=+1494.632949320" observedRunningTime="2026-01-26 23:33:11.417772368 +0000 UTC m=+1495.582479823" watchObservedRunningTime="2026-01-26 23:33:11.424236579 +0000 UTC m=+1495.588944044" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.238532 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gf67f"] Jan 26 23:33:13 crc kubenswrapper[4995]: E0126 23:33:13.239066 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb78169-d22d-4b1a-a51b-ad25391e10e9" containerName="mariadb-database-create" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.239077 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb78169-d22d-4b1a-a51b-ad25391e10e9" containerName="mariadb-database-create" Jan 26 23:33:13 crc kubenswrapper[4995]: E0126 23:33:13.239093 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35e92c48-e139-4a90-8601-1bd4d2937700" containerName="mariadb-account-create-update" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.239117 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="35e92c48-e139-4a90-8601-1bd4d2937700" containerName="mariadb-account-create-update" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.239258 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="35e92c48-e139-4a90-8601-1bd4d2937700" containerName="mariadb-account-create-update" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.239268 4995 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb78169-d22d-4b1a-a51b-ad25391e10e9" containerName="mariadb-database-create" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.239778 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.241512 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-7pps7" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.241785 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.254866 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gf67f"] Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.319239 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgblw\" (UniqueName: \"kubernetes.io/projected/64055a76-6d73-45e6-8c44-424f42362b20-kube-api-access-zgblw\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.319307 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-db-sync-config-data\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.319450 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.319569 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-config-data\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.421396 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.421502 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-config-data\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.421575 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgblw\" (UniqueName: \"kubernetes.io/projected/64055a76-6d73-45e6-8c44-424f42362b20-kube-api-access-zgblw\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.421640 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-db-sync-config-data\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.426713 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.428576 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-db-sync-config-data\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.436913 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-config-data\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.445306 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgblw\" (UniqueName: \"kubernetes.io/projected/64055a76-6d73-45e6-8c44-424f42362b20-kube-api-access-zgblw\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.567507 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:14 crc kubenswrapper[4995]: I0126 23:33:14.057971 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gf67f"] Jan 26 23:33:14 crc kubenswrapper[4995]: I0126 23:33:14.421501 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" event={"ID":"64055a76-6d73-45e6-8c44-424f42362b20","Type":"ContainerStarted","Data":"99eb0b14efb02af86f6c14feef7a145f682f560ca0fbfcaebf933cf15112c438"} Jan 26 23:33:14 crc kubenswrapper[4995]: I0126 23:33:14.421540 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" event={"ID":"64055a76-6d73-45e6-8c44-424f42362b20","Type":"ContainerStarted","Data":"d747f3d56270407e9ad3ec6a1c9d987864ed2e6cea6f518a69edbdd0f6c50044"} Jan 26 23:33:14 crc kubenswrapper[4995]: I0126 23:33:14.444306 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" podStartSLOduration=1.444289564 podStartE2EDuration="1.444289564s" podCreationTimestamp="2026-01-26 23:33:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:14.440593421 +0000 UTC m=+1498.605300876" watchObservedRunningTime="2026-01-26 23:33:14.444289564 +0000 UTC m=+1498.608997029" Jan 26 23:33:16 crc kubenswrapper[4995]: I0126 23:33:16.442399 4995 generic.go:334] "Generic (PLEG): container finished" podID="64055a76-6d73-45e6-8c44-424f42362b20" containerID="99eb0b14efb02af86f6c14feef7a145f682f560ca0fbfcaebf933cf15112c438" exitCode=0 Jan 26 23:33:16 crc kubenswrapper[4995]: I0126 23:33:16.442659 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" 
event={"ID":"64055a76-6d73-45e6-8c44-424f42362b20","Type":"ContainerDied","Data":"99eb0b14efb02af86f6c14feef7a145f682f560ca0fbfcaebf933cf15112c438"} Jan 26 23:33:17 crc kubenswrapper[4995]: I0126 23:33:17.934597 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.038516 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-combined-ca-bundle\") pod \"64055a76-6d73-45e6-8c44-424f42362b20\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.038611 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-config-data\") pod \"64055a76-6d73-45e6-8c44-424f42362b20\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.038663 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgblw\" (UniqueName: \"kubernetes.io/projected/64055a76-6d73-45e6-8c44-424f42362b20-kube-api-access-zgblw\") pod \"64055a76-6d73-45e6-8c44-424f42362b20\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.038701 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-db-sync-config-data\") pod \"64055a76-6d73-45e6-8c44-424f42362b20\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.044218 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/64055a76-6d73-45e6-8c44-424f42362b20-kube-api-access-zgblw" (OuterVolumeSpecName: "kube-api-access-zgblw") pod "64055a76-6d73-45e6-8c44-424f42362b20" (UID: "64055a76-6d73-45e6-8c44-424f42362b20"). InnerVolumeSpecName "kube-api-access-zgblw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.044410 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "64055a76-6d73-45e6-8c44-424f42362b20" (UID: "64055a76-6d73-45e6-8c44-424f42362b20"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.059549 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64055a76-6d73-45e6-8c44-424f42362b20" (UID: "64055a76-6d73-45e6-8c44-424f42362b20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.094229 4995 scope.go:117] "RemoveContainer" containerID="314d9c39155357f797a09c4f9a573a846dd0baf7a5fe546731579ee9d200fd82" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.112202 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-config-data" (OuterVolumeSpecName: "config-data") pod "64055a76-6d73-45e6-8c44-424f42362b20" (UID: "64055a76-6d73-45e6-8c44-424f42362b20"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.140841 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.140899 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.140920 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgblw\" (UniqueName: \"kubernetes.io/projected/64055a76-6d73-45e6-8c44-424f42362b20-kube-api-access-zgblw\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.140940 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.194614 4995 scope.go:117] "RemoveContainer" containerID="7b52cd788a34a33152655fad206082ca4ae4aa2dde98a41e59cc6dacf5cc9c02" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.461970 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" event={"ID":"64055a76-6d73-45e6-8c44-424f42362b20","Type":"ContainerDied","Data":"d747f3d56270407e9ad3ec6a1c9d987864ed2e6cea6f518a69edbdd0f6c50044"} Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.462007 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d747f3d56270407e9ad3ec6a1c9d987864ed2e6cea6f518a69edbdd0f6c50044" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.462033 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.745066 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:18 crc kubenswrapper[4995]: E0126 23:33:18.745785 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64055a76-6d73-45e6-8c44-424f42362b20" containerName="watcher-kuttl-db-sync" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.745807 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="64055a76-6d73-45e6-8c44-424f42362b20" containerName="watcher-kuttl-db-sync" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.745994 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="64055a76-6d73-45e6-8c44-424f42362b20" containerName="watcher-kuttl-db-sync" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.746987 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.748924 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.749051 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksq45\" (UniqueName: \"kubernetes.io/projected/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-kube-api-access-ksq45\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.749166 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.749272 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.749314 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.749436 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.751441 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-7pps7" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.753656 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.758554 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.824667 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.825654 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.828807 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.836833 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850201 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850418 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850511 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850590 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850656 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850727 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850865 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850964 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2153945e-4846-45d3-8e7c-dfaff880bbc8-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.851040 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksq45\" (UniqueName: \"kubernetes.io/projected/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-kube-api-access-ksq45\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.851149 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.851316 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tl8f\" (UniqueName: \"kubernetes.io/projected/2153945e-4846-45d3-8e7c-dfaff880bbc8-kube-api-access-9tl8f\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.855740 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.856169 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.856682 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.859404 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.862198 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.882241 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.883894 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.885663 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksq45\" (UniqueName: \"kubernetes.io/projected/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-kube-api-access-ksq45\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.887490 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.892643 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952083 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksj9z\" (UniqueName: \"kubernetes.io/projected/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-kube-api-access-ksj9z\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952191 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952237 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: 
\"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952284 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952305 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2153945e-4846-45d3-8e7c-dfaff880bbc8-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952441 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952519 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952538 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tl8f\" (UniqueName: \"kubernetes.io/projected/2153945e-4846-45d3-8e7c-dfaff880bbc8-kube-api-access-9tl8f\") pod \"watcher-kuttl-applier-0\" (UID: 
\"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952568 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952673 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952705 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2153945e-4846-45d3-8e7c-dfaff880bbc8-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952708 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.955738 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data\") 
pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.956198 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.956304 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.967062 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tl8f\" (UniqueName: \"kubernetes.io/projected/2153945e-4846-45d3-8e7c-dfaff880bbc8-kube-api-access-9tl8f\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.053684 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.053726 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-cert-memcached-mtls\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.053769 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksj9z\" (UniqueName: \"kubernetes.io/projected/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-kube-api-access-ksj9z\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.053818 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.053864 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.053885 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.054634 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-logs\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.056736 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.057607 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.057890 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.058283 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.068356 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.074605 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksj9z\" (UniqueName: \"kubernetes.io/projected/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-kube-api-access-ksj9z\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.153748 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.249142 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.514301 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.524897 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:19 crc kubenswrapper[4995]: W0126 23:33:19.541435 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2153945e_4846_45d3_8e7c_dfaff880bbc8.slice/crio-381b689ccc249c5529258e69f3905511aa53d241bbfd4a548a025214c010ca74 WatchSource:0}: Error finding container 381b689ccc249c5529258e69f3905511aa53d241bbfd4a548a025214c010ca74: Status 404 returned error can't find the container with id 381b689ccc249c5529258e69f3905511aa53d241bbfd4a548a025214c010ca74 Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.814769 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:19 crc kubenswrapper[4995]: W0126 23:33:19.820501 
4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93b2c055_90b0_4ee2_8155_9d7a63e5a8ac.slice/crio-8654007c1ca8f98c665a231383230a614f26830fc3180c6562d94c6912d21a0a WatchSource:0}: Error finding container 8654007c1ca8f98c665a231383230a614f26830fc3180c6562d94c6912d21a0a: Status 404 returned error can't find the container with id 8654007c1ca8f98c665a231383230a614f26830fc3180c6562d94c6912d21a0a Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.482425 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5cca5bb3-8e8f-412e-a5a7-b0b072f72500","Type":"ContainerStarted","Data":"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598"} Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.482476 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5cca5bb3-8e8f-412e-a5a7-b0b072f72500","Type":"ContainerStarted","Data":"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e"} Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.482488 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5cca5bb3-8e8f-412e-a5a7-b0b072f72500","Type":"ContainerStarted","Data":"4d3284c898b59faa984bdb5db96098bca7f16dd71bef193b7303ce861694df97"} Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.483889 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.486391 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac","Type":"ContainerStarted","Data":"7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565"} Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 
23:33:20.486424 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac","Type":"ContainerStarted","Data":"8654007c1ca8f98c665a231383230a614f26830fc3180c6562d94c6912d21a0a"} Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.488457 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2153945e-4846-45d3-8e7c-dfaff880bbc8","Type":"ContainerStarted","Data":"33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02"} Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.488502 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2153945e-4846-45d3-8e7c-dfaff880bbc8","Type":"ContainerStarted","Data":"381b689ccc249c5529258e69f3905511aa53d241bbfd4a548a025214c010ca74"} Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.529820 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.529801537 podStartE2EDuration="2.529801537s" podCreationTimestamp="2026-01-26 23:33:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:20.524817002 +0000 UTC m=+1504.689524467" watchObservedRunningTime="2026-01-26 23:33:20.529801537 +0000 UTC m=+1504.694509002" Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.559304 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.559285745 podStartE2EDuration="2.559285745s" podCreationTimestamp="2026-01-26 23:33:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:20.548070345 +0000 UTC m=+1504.712777810" 
watchObservedRunningTime="2026-01-26 23:33:20.559285745 +0000 UTC m=+1504.723993210" Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.577706 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.577688656 podStartE2EDuration="2.577688656s" podCreationTimestamp="2026-01-26 23:33:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:20.569424239 +0000 UTC m=+1504.734131694" watchObservedRunningTime="2026-01-26 23:33:20.577688656 +0000 UTC m=+1504.742396121" Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.786626 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:21 crc kubenswrapper[4995]: I0126 23:33:21.978689 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:22 crc kubenswrapper[4995]: I0126 23:33:22.517605 4995 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 23:33:22 crc kubenswrapper[4995]: I0126 23:33:22.772644 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:23 crc kubenswrapper[4995]: I0126 23:33:23.208984 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:24 crc kubenswrapper[4995]: I0126 23:33:24.069179 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:24 crc kubenswrapper[4995]: I0126 23:33:24.154532 4995 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:24 crc kubenswrapper[4995]: I0126 23:33:24.407909 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:25 crc kubenswrapper[4995]: I0126 23:33:25.618425 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:26 crc kubenswrapper[4995]: I0126 23:33:26.801811 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:27 crc kubenswrapper[4995]: I0126 23:33:27.990880 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.068775 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.084780 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.154743 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.180112 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.211463 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.250333 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.286273 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.579018 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.583672 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.607811 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.622332 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.399230 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.599870 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gf67f"] Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.611019 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gf67f"] Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.639703 4995 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["watcher-kuttl-default/watcher594d-account-delete-csqxj"] Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.641039 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.647860 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher594d-account-delete-csqxj"] Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.708320 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.740193 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.745822 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18d06905-621f-4fcd-96a9-a3da780dbf9f-operator-scripts\") pod \"watcher594d-account-delete-csqxj\" (UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.746352 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8fzr\" (UniqueName: \"kubernetes.io/projected/18d06905-621f-4fcd-96a9-a3da780dbf9f-kube-api-access-g8fzr\") pod \"watcher594d-account-delete-csqxj\" (UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:30 crc kubenswrapper[4995]: E0126 23:33:30.747517 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 26 23:33:30 crc kubenswrapper[4995]: E0126 23:33:30.747602 4995 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data podName:5cca5bb3-8e8f-412e-a5a7-b0b072f72500 nodeName:}" failed. No retries permitted until 2026-01-26 23:33:31.247582571 +0000 UTC m=+1515.412290096 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data") pod "watcher-kuttl-api-0" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500") : secret "watcher-kuttl-api-config-data" not found Jan 26 23:33:30 crc kubenswrapper[4995]: E0126 23:33:30.747886 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-applier-config-data: secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:30 crc kubenswrapper[4995]: E0126 23:33:30.747921 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data podName:2153945e-4846-45d3-8e7c-dfaff880bbc8 nodeName:}" failed. No retries permitted until 2026-01-26 23:33:31.247912859 +0000 UTC m=+1515.412620324 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data") pod "watcher-kuttl-applier-0" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8") : secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.780168 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.847967 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8fzr\" (UniqueName: \"kubernetes.io/projected/18d06905-621f-4fcd-96a9-a3da780dbf9f-kube-api-access-g8fzr\") pod \"watcher594d-account-delete-csqxj\" (UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.848023 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18d06905-621f-4fcd-96a9-a3da780dbf9f-operator-scripts\") pod \"watcher594d-account-delete-csqxj\" (UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.849037 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18d06905-621f-4fcd-96a9-a3da780dbf9f-operator-scripts\") pod \"watcher594d-account-delete-csqxj\" (UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.870828 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8fzr\" (UniqueName: \"kubernetes.io/projected/18d06905-621f-4fcd-96a9-a3da780dbf9f-kube-api-access-g8fzr\") pod \"watcher594d-account-delete-csqxj\" 
(UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.960402 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:31 crc kubenswrapper[4995]: E0126 23:33:31.262296 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 26 23:33:31 crc kubenswrapper[4995]: E0126 23:33:31.262572 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data podName:5cca5bb3-8e8f-412e-a5a7-b0b072f72500 nodeName:}" failed. No retries permitted until 2026-01-26 23:33:32.262558524 +0000 UTC m=+1516.427265989 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data") pod "watcher-kuttl-api-0" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500") : secret "watcher-kuttl-api-config-data" not found Jan 26 23:33:31 crc kubenswrapper[4995]: E0126 23:33:31.262869 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-applier-config-data: secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:31 crc kubenswrapper[4995]: E0126 23:33:31.262898 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data podName:2153945e-4846-45d3-8e7c-dfaff880bbc8 nodeName:}" failed. No retries permitted until 2026-01-26 23:33:32.262890453 +0000 UTC m=+1516.427597918 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data") pod "watcher-kuttl-applier-0" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8") : secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:31 crc kubenswrapper[4995]: I0126 23:33:31.418455 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher594d-account-delete-csqxj"] Jan 26 23:33:31 crc kubenswrapper[4995]: I0126 23:33:31.596054 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" event={"ID":"18d06905-621f-4fcd-96a9-a3da780dbf9f","Type":"ContainerStarted","Data":"694599d83d729f31d133d0ca0d751152908c5ca0a1daa099453eccc3981ddd91"} Jan 26 23:33:31 crc kubenswrapper[4995]: I0126 23:33:31.596193 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-kuttl-api-log" containerID="cri-o://e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e" gracePeriod=30 Jan 26 23:33:31 crc kubenswrapper[4995]: I0126 23:33:31.596414 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="2153945e-4846-45d3-8e7c-dfaff880bbc8" containerName="watcher-applier" containerID="cri-o://33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" gracePeriod=30 Jan 26 23:33:31 crc kubenswrapper[4995]: I0126 23:33:31.596572 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-api" containerID="cri-o://1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598" gracePeriod=30 Jan 26 23:33:31 crc kubenswrapper[4995]: I0126 23:33:31.597047 4995 kubelet_pods.go:1007] "Unable to retrieve 
pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-7pps7\" not found" Jan 26 23:33:31 crc kubenswrapper[4995]: E0126 23:33:31.770853 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:31 crc kubenswrapper[4995]: E0126 23:33:31.770947 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data podName:93b2c055-90b0-4ee2-8155-9d7a63e5a8ac nodeName:}" failed. No retries permitted until 2026-01-26 23:33:32.270924723 +0000 UTC m=+1516.435632218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.278938 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.279469 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data podName:93b2c055-90b0-4ee2-8155-9d7a63e5a8ac nodeName:}" failed. No retries permitted until 2026-01-26 23:33:33.279447795 +0000 UTC m=+1517.444155260 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.279230 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-applier-config-data: secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.279593 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data podName:2153945e-4846-45d3-8e7c-dfaff880bbc8 nodeName:}" failed. No retries permitted until 2026-01-26 23:33:34.279568468 +0000 UTC m=+1518.444276003 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data") pod "watcher-kuttl-applier-0" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8") : secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.279279 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.279629 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data podName:5cca5bb3-8e8f-412e-a5a7-b0b072f72500 nodeName:}" failed. No retries permitted until 2026-01-26 23:33:34.279621409 +0000 UTC m=+1518.444329004 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data") pod "watcher-kuttl-api-0" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500") : secret "watcher-kuttl-api-config-data" not found Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.510777 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.530788 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64055a76-6d73-45e6-8c44-424f42362b20" path="/var/lib/kubelet/pods/64055a76-6d73-45e6-8c44-424f42362b20/volumes" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.603779 4995 generic.go:334] "Generic (PLEG): container finished" podID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerID="1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598" exitCode=0 Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.603816 4995 generic.go:334] "Generic (PLEG): container finished" podID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerID="e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e" exitCode=143 Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.603829 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.603879 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5cca5bb3-8e8f-412e-a5a7-b0b072f72500","Type":"ContainerDied","Data":"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598"} Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.603921 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5cca5bb3-8e8f-412e-a5a7-b0b072f72500","Type":"ContainerDied","Data":"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e"} Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.603937 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5cca5bb3-8e8f-412e-a5a7-b0b072f72500","Type":"ContainerDied","Data":"4d3284c898b59faa984bdb5db96098bca7f16dd71bef193b7303ce861694df97"} Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.603974 4995 scope.go:117] "RemoveContainer" containerID="1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.607289 4995 generic.go:334] "Generic (PLEG): container finished" podID="18d06905-621f-4fcd-96a9-a3da780dbf9f" containerID="bbc420fd12fe1d211845fe7f68211386fea3f13c2e6223073fc5536f18ea16a2" exitCode=0 Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.607341 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" event={"ID":"18d06905-621f-4fcd-96a9-a3da780dbf9f","Type":"ContainerDied","Data":"bbc420fd12fe1d211845fe7f68211386fea3f13c2e6223073fc5536f18ea16a2"} Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.607438 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
podUID="93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" containerName="watcher-decision-engine" containerID="cri-o://7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565" gracePeriod=30 Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.633882 4995 scope.go:117] "RemoveContainer" containerID="e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.649984 4995 scope.go:117] "RemoveContainer" containerID="1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598" Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.650364 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598\": container with ID starting with 1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598 not found: ID does not exist" containerID="1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.650391 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598"} err="failed to get container status \"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598\": rpc error: code = NotFound desc = could not find container \"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598\": container with ID starting with 1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598 not found: ID does not exist" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.650411 4995 scope.go:117] "RemoveContainer" containerID="e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e" Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.650579 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e\": container with ID starting with e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e not found: ID does not exist" containerID="e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.650602 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e"} err="failed to get container status \"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e\": rpc error: code = NotFound desc = could not find container \"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e\": container with ID starting with e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e not found: ID does not exist" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.650616 4995 scope.go:117] "RemoveContainer" containerID="1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.650776 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598"} err="failed to get container status \"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598\": rpc error: code = NotFound desc = could not find container \"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598\": container with ID starting with 1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598 not found: ID does not exist" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.650794 4995 scope.go:117] "RemoveContainer" containerID="e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.651004 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e"} err="failed to get container status \"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e\": rpc error: code = NotFound desc = could not find container \"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e\": container with ID starting with e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e not found: ID does not exist" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.685660 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-custom-prometheus-ca\") pod \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.685714 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksq45\" (UniqueName: \"kubernetes.io/projected/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-kube-api-access-ksq45\") pod \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.685770 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-combined-ca-bundle\") pod \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.685936 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data\") pod \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.685985 4995 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-cert-memcached-mtls\") pod \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.686031 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-logs\") pod \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.686715 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-logs" (OuterVolumeSpecName: "logs") pod "5cca5bb3-8e8f-412e-a5a7-b0b072f72500" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.709194 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-kube-api-access-ksq45" (OuterVolumeSpecName: "kube-api-access-ksq45") pod "5cca5bb3-8e8f-412e-a5a7-b0b072f72500" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500"). InnerVolumeSpecName "kube-api-access-ksq45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.713650 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "5cca5bb3-8e8f-412e-a5a7-b0b072f72500" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.714004 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cca5bb3-8e8f-412e-a5a7-b0b072f72500" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.741617 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data" (OuterVolumeSpecName: "config-data") pod "5cca5bb3-8e8f-412e-a5a7-b0b072f72500" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.767638 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "5cca5bb3-8e8f-412e-a5a7-b0b072f72500" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.787594 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.787627 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.787637 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.787646 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.787656 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.787666 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksq45\" (UniqueName: \"kubernetes.io/projected/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-kube-api-access-ksq45\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.961752 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.970274 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:33 crc kubenswrapper[4995]: E0126 23:33:33.294878 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:33 crc kubenswrapper[4995]: E0126 23:33:33.295170 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data podName:93b2c055-90b0-4ee2-8155-9d7a63e5a8ac nodeName:}" failed. No retries permitted until 2026-01-26 23:33:35.295153294 +0000 UTC m=+1519.459860759 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.020214 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.114520 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18d06905-621f-4fcd-96a9-a3da780dbf9f-operator-scripts\") pod \"18d06905-621f-4fcd-96a9-a3da780dbf9f\" (UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.114619 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8fzr\" (UniqueName: \"kubernetes.io/projected/18d06905-621f-4fcd-96a9-a3da780dbf9f-kube-api-access-g8fzr\") pod \"18d06905-621f-4fcd-96a9-a3da780dbf9f\" (UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.115729 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18d06905-621f-4fcd-96a9-a3da780dbf9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18d06905-621f-4fcd-96a9-a3da780dbf9f" (UID: "18d06905-621f-4fcd-96a9-a3da780dbf9f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.119402 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18d06905-621f-4fcd-96a9-a3da780dbf9f-kube-api-access-g8fzr" (OuterVolumeSpecName: "kube-api-access-g8fzr") pod "18d06905-621f-4fcd-96a9-a3da780dbf9f" (UID: "18d06905-621f-4fcd-96a9-a3da780dbf9f"). InnerVolumeSpecName "kube-api-access-g8fzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:34 crc kubenswrapper[4995]: E0126 23:33:34.159966 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:33:34 crc kubenswrapper[4995]: E0126 23:33:34.161617 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:33:34 crc kubenswrapper[4995]: E0126 23:33:34.163765 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:33:34 crc kubenswrapper[4995]: E0126 23:33:34.163792 4995 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="2153945e-4846-45d3-8e7c-dfaff880bbc8" containerName="watcher-applier" Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.215961 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18d06905-621f-4fcd-96a9-a3da780dbf9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.216002 4995 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-g8fzr\" (UniqueName: \"kubernetes.io/projected/18d06905-621f-4fcd-96a9-a3da780dbf9f-kube-api-access-g8fzr\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:34 crc kubenswrapper[4995]: E0126 23:33:34.317928 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-applier-config-data: secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:34 crc kubenswrapper[4995]: E0126 23:33:34.318011 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data podName:2153945e-4846-45d3-8e7c-dfaff880bbc8 nodeName:}" failed. No retries permitted until 2026-01-26 23:33:38.317994034 +0000 UTC m=+1522.482701509 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data") pod "watcher-kuttl-applier-0" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8") : secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.532628 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" path="/var/lib/kubelet/pods/5cca5bb3-8e8f-412e-a5a7-b0b072f72500/volumes" Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.634520 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" event={"ID":"18d06905-621f-4fcd-96a9-a3da780dbf9f","Type":"ContainerDied","Data":"694599d83d729f31d133d0ca0d751152908c5ca0a1daa099453eccc3981ddd91"} Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.634577 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="694599d83d729f31d133d0ca0d751152908c5ca0a1daa099453eccc3981ddd91" Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.634539 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:35 crc kubenswrapper[4995]: E0126 23:33:35.338372 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:35 crc kubenswrapper[4995]: E0126 23:33:35.338521 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data podName:93b2c055-90b0-4ee2-8155-9d7a63e5a8ac nodeName:}" failed. No retries permitted until 2026-01-26 23:33:39.338486274 +0000 UTC m=+1523.503193779 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.697225 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-65c6n"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.712249 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-65c6n"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.722542 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher594d-account-delete-csqxj"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.732496 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher594d-account-delete-csqxj"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.741465 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-594d-account-create-update-54znd"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.747814 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcher-594d-account-create-update-54znd"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.853860 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-c64b2"] Jan 26 23:33:35 crc kubenswrapper[4995]: E0126 23:33:35.854633 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18d06905-621f-4fcd-96a9-a3da780dbf9f" containerName="mariadb-account-delete" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.854651 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="18d06905-621f-4fcd-96a9-a3da780dbf9f" containerName="mariadb-account-delete" Jan 26 23:33:35 crc kubenswrapper[4995]: E0126 23:33:35.854680 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-api" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.854687 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-api" Jan 26 23:33:35 crc kubenswrapper[4995]: E0126 23:33:35.854698 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-kuttl-api-log" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.854704 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-kuttl-api-log" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.854863 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="18d06905-621f-4fcd-96a9-a3da780dbf9f" containerName="mariadb-account-delete" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.854889 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-api" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.854902 4995 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-kuttl-api-log" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.855462 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.914942 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-c64b2"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.950462 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2f73d1-0380-4fcf-9fde-35f821426fed-operator-scripts\") pod \"watcher-db-create-c64b2\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.950582 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj9d8\" (UniqueName: \"kubernetes.io/projected/ca2f73d1-0380-4fcf-9fde-35f821426fed-kube-api-access-jj9d8\") pod \"watcher-db-create-c64b2\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.950641 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-1555-account-create-update-j8dp6"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.951589 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.954942 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.967934 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-1555-account-create-update-j8dp6"] Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.052069 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2f73d1-0380-4fcf-9fde-35f821426fed-operator-scripts\") pod \"watcher-db-create-c64b2\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.052178 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w9s6\" (UniqueName: \"kubernetes.io/projected/599bdb97-9d21-44b9-9a59-84320b1c4a6e-kube-api-access-5w9s6\") pod \"watcher-1555-account-create-update-j8dp6\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.052207 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj9d8\" (UniqueName: \"kubernetes.io/projected/ca2f73d1-0380-4fcf-9fde-35f821426fed-kube-api-access-jj9d8\") pod \"watcher-db-create-c64b2\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.052237 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/599bdb97-9d21-44b9-9a59-84320b1c4a6e-operator-scripts\") pod 
\"watcher-1555-account-create-update-j8dp6\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.052954 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2f73d1-0380-4fcf-9fde-35f821426fed-operator-scripts\") pod \"watcher-db-create-c64b2\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.077053 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj9d8\" (UniqueName: \"kubernetes.io/projected/ca2f73d1-0380-4fcf-9fde-35f821426fed-kube-api-access-jj9d8\") pod \"watcher-db-create-c64b2\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.153552 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w9s6\" (UniqueName: \"kubernetes.io/projected/599bdb97-9d21-44b9-9a59-84320b1c4a6e-kube-api-access-5w9s6\") pod \"watcher-1555-account-create-update-j8dp6\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.153622 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/599bdb97-9d21-44b9-9a59-84320b1c4a6e-operator-scripts\") pod \"watcher-1555-account-create-update-j8dp6\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.154794 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/599bdb97-9d21-44b9-9a59-84320b1c4a6e-operator-scripts\") pod \"watcher-1555-account-create-update-j8dp6\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.172694 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.176296 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w9s6\" (UniqueName: \"kubernetes.io/projected/599bdb97-9d21-44b9-9a59-84320b1c4a6e-kube-api-access-5w9s6\") pod \"watcher-1555-account-create-update-j8dp6\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.280442 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.610901 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.611667 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eb78169-d22d-4b1a-a51b-ad25391e10e9" path="/var/lib/kubelet/pods/0eb78169-d22d-4b1a-a51b-ad25391e10e9/volumes" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.613041 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18d06905-621f-4fcd-96a9-a3da780dbf9f" path="/var/lib/kubelet/pods/18d06905-621f-4fcd-96a9-a3da780dbf9f/volumes" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.615679 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35e92c48-e139-4a90-8601-1bd4d2937700" path="/var/lib/kubelet/pods/35e92c48-e139-4a90-8601-1bd4d2937700/volumes" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.617412 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n58tq"] Jan 26 23:33:36 crc kubenswrapper[4995]: E0126 23:33:36.617716 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2153945e-4846-45d3-8e7c-dfaff880bbc8" containerName="watcher-applier" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.617731 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2153945e-4846-45d3-8e7c-dfaff880bbc8" containerName="watcher-applier" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.618150 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2153945e-4846-45d3-8e7c-dfaff880bbc8" containerName="watcher-applier" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.619726 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.625730 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n58tq"] Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.668384 4995 generic.go:334] "Generic (PLEG): container finished" podID="2153945e-4846-45d3-8e7c-dfaff880bbc8" containerID="33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" exitCode=0 Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.668431 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2153945e-4846-45d3-8e7c-dfaff880bbc8","Type":"ContainerDied","Data":"33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02"} Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.668450 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.668471 4995 scope.go:117] "RemoveContainer" containerID="33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.668459 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2153945e-4846-45d3-8e7c-dfaff880bbc8","Type":"ContainerDied","Data":"381b689ccc249c5529258e69f3905511aa53d241bbfd4a548a025214c010ca74"} Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.708414 4995 scope.go:117] "RemoveContainer" containerID="33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" Jan 26 23:33:36 crc kubenswrapper[4995]: E0126 23:33:36.709035 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02\": container with ID starting with 
33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02 not found: ID does not exist" containerID="33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.709062 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02"} err="failed to get container status \"33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02\": rpc error: code = NotFound desc = could not find container \"33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02\": container with ID starting with 33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02 not found: ID does not exist" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.711145 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-catalog-content\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.711250 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-utilities\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.711439 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njx4n\" (UniqueName: \"kubernetes.io/projected/08dc9823-c5ed-451c-a202-312e9b4cd254-kube-api-access-njx4n\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " 
pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.776502 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812184 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2153945e-4846-45d3-8e7c-dfaff880bbc8-logs\") pod \"2153945e-4846-45d3-8e7c-dfaff880bbc8\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812237 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data\") pod \"2153945e-4846-45d3-8e7c-dfaff880bbc8\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812280 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-cert-memcached-mtls\") pod \"2153945e-4846-45d3-8e7c-dfaff880bbc8\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812303 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tl8f\" (UniqueName: \"kubernetes.io/projected/2153945e-4846-45d3-8e7c-dfaff880bbc8-kube-api-access-9tl8f\") pod \"2153945e-4846-45d3-8e7c-dfaff880bbc8\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812340 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-combined-ca-bundle\") pod \"2153945e-4846-45d3-8e7c-dfaff880bbc8\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " 
Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812553 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njx4n\" (UniqueName: \"kubernetes.io/projected/08dc9823-c5ed-451c-a202-312e9b4cd254-kube-api-access-njx4n\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812642 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-catalog-content\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812667 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-utilities\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812825 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2153945e-4846-45d3-8e7c-dfaff880bbc8-logs" (OuterVolumeSpecName: "logs") pod "2153945e-4846-45d3-8e7c-dfaff880bbc8" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.813939 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-catalog-content\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.817076 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-utilities\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.819827 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2153945e-4846-45d3-8e7c-dfaff880bbc8-kube-api-access-9tl8f" (OuterVolumeSpecName: "kube-api-access-9tl8f") pod "2153945e-4846-45d3-8e7c-dfaff880bbc8" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8"). InnerVolumeSpecName "kube-api-access-9tl8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.854866 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njx4n\" (UniqueName: \"kubernetes.io/projected/08dc9823-c5ed-451c-a202-312e9b4cd254-kube-api-access-njx4n\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.857706 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-c64b2"] Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.869540 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2153945e-4846-45d3-8e7c-dfaff880bbc8" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.914862 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2153945e-4846-45d3-8e7c-dfaff880bbc8-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.914906 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tl8f\" (UniqueName: \"kubernetes.io/projected/2153945e-4846-45d3-8e7c-dfaff880bbc8-kube-api-access-9tl8f\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.914920 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.916227 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data" (OuterVolumeSpecName: "config-data") pod "2153945e-4846-45d3-8e7c-dfaff880bbc8" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.923289 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "2153945e-4846-45d3-8e7c-dfaff880bbc8" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.956131 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.962032 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-1555-account-create-update-j8dp6"] Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.016565 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.017020 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.016622 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.031141 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.478344 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.629074 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-cert-memcached-mtls\") pod \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.629172 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-logs\") pod \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.629220 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksj9z\" (UniqueName: \"kubernetes.io/projected/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-kube-api-access-ksj9z\") pod \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.629274 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data\") pod \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.629312 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-custom-prometheus-ca\") pod \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.629368 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-combined-ca-bundle\") pod \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.630761 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-logs" (OuterVolumeSpecName: "logs") pod "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.640501 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n58tq"] Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.656137 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-kube-api-access-ksj9z" (OuterVolumeSpecName: "kube-api-access-ksj9z") pod "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac"). InnerVolumeSpecName "kube-api-access-ksj9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.668273 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.695388 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data" (OuterVolumeSpecName: "config-data") pod "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.701134 4995 generic.go:334] "Generic (PLEG): container finished" podID="ca2f73d1-0380-4fcf-9fde-35f821426fed" containerID="4af14df6baf5e2d7f5d921b037ff739c3922a94531b7d54b66151b9b3794fdee" exitCode=0 Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.701211 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-c64b2" event={"ID":"ca2f73d1-0380-4fcf-9fde-35f821426fed","Type":"ContainerDied","Data":"4af14df6baf5e2d7f5d921b037ff739c3922a94531b7d54b66151b9b3794fdee"} Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.701247 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-c64b2" event={"ID":"ca2f73d1-0380-4fcf-9fde-35f821426fed","Type":"ContainerStarted","Data":"f0828b9c5c98a0cabed1aba541dbb0ad7c039d4728621a1138fa826256600b06"} Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.705005 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" event={"ID":"599bdb97-9d21-44b9-9a59-84320b1c4a6e","Type":"ContainerStarted","Data":"145cc5b8f4d1b5f2f7c477df014248adbc6dd21d5028dfe55f19a4cb11fa10b1"} Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.705060 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" 
event={"ID":"599bdb97-9d21-44b9-9a59-84320b1c4a6e","Type":"ContainerStarted","Data":"bcbdb30b1631e4cc7ba330a7b5bac0ee6f3b18ba19f15b08d579b5924d2c1362"} Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.707154 4995 generic.go:334] "Generic (PLEG): container finished" podID="93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" containerID="7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565" exitCode=0 Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.707201 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac","Type":"ContainerDied","Data":"7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565"} Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.707232 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac","Type":"ContainerDied","Data":"8654007c1ca8f98c665a231383230a614f26830fc3180c6562d94c6912d21a0a"} Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.707255 4995 scope.go:117] "RemoveContainer" containerID="7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.707368 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.720389 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.732427 4995 scope.go:117] "RemoveContainer" containerID="7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.733678 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.733785 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksj9z\" (UniqueName: \"kubernetes.io/projected/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-kube-api-access-ksj9z\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.733878 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.733957 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.734041 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:37 crc kubenswrapper[4995]: E0126 23:33:37.733843 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565\": container with ID starting with 7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565 not found: ID does not exist" 
containerID="7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.734236 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565"} err="failed to get container status \"7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565\": rpc error: code = NotFound desc = could not find container \"7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565\": container with ID starting with 7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565 not found: ID does not exist" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.752215 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.835976 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.034329 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" podStartSLOduration=3.03431199 podStartE2EDuration="3.03431199s" podCreationTimestamp="2026-01-26 23:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:37.744178765 +0000 UTC m=+1521.908886230" watchObservedRunningTime="2026-01-26 23:33:38.03431199 +0000 UTC m=+1522.199019455" Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.039405 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.049536 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.530205 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2153945e-4846-45d3-8e7c-dfaff880bbc8" path="/var/lib/kubelet/pods/2153945e-4846-45d3-8e7c-dfaff880bbc8/volumes" Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.531066 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" path="/var/lib/kubelet/pods/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/volumes" Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.716297 4995 generic.go:334] "Generic (PLEG): container finished" podID="08dc9823-c5ed-451c-a202-312e9b4cd254" 
containerID="62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68" exitCode=0 Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.716853 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n58tq" event={"ID":"08dc9823-c5ed-451c-a202-312e9b4cd254","Type":"ContainerDied","Data":"62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68"} Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.717780 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n58tq" event={"ID":"08dc9823-c5ed-451c-a202-312e9b4cd254","Type":"ContainerStarted","Data":"ad6e8d03448bf620648f7ff3a9ff045c078111f4db8adc6377c026512acd50ae"} Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.722486 4995 generic.go:334] "Generic (PLEG): container finished" podID="599bdb97-9d21-44b9-9a59-84320b1c4a6e" containerID="145cc5b8f4d1b5f2f7c477df014248adbc6dd21d5028dfe55f19a4cb11fa10b1" exitCode=0 Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.722542 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" event={"ID":"599bdb97-9d21-44b9-9a59-84320b1c4a6e","Type":"ContainerDied","Data":"145cc5b8f4d1b5f2f7c477df014248adbc6dd21d5028dfe55f19a4cb11fa10b1"} Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.159833 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.262044 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2f73d1-0380-4fcf-9fde-35f821426fed-operator-scripts\") pod \"ca2f73d1-0380-4fcf-9fde-35f821426fed\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.262210 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj9d8\" (UniqueName: \"kubernetes.io/projected/ca2f73d1-0380-4fcf-9fde-35f821426fed-kube-api-access-jj9d8\") pod \"ca2f73d1-0380-4fcf-9fde-35f821426fed\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.262962 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2f73d1-0380-4fcf-9fde-35f821426fed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca2f73d1-0380-4fcf-9fde-35f821426fed" (UID: "ca2f73d1-0380-4fcf-9fde-35f821426fed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.267756 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2f73d1-0380-4fcf-9fde-35f821426fed-kube-api-access-jj9d8" (OuterVolumeSpecName: "kube-api-access-jj9d8") pod "ca2f73d1-0380-4fcf-9fde-35f821426fed" (UID: "ca2f73d1-0380-4fcf-9fde-35f821426fed"). InnerVolumeSpecName "kube-api-access-jj9d8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.363783 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2f73d1-0380-4fcf-9fde-35f821426fed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.364025 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj9d8\" (UniqueName: \"kubernetes.io/projected/ca2f73d1-0380-4fcf-9fde-35f821426fed-kube-api-access-jj9d8\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.732599 4995 generic.go:334] "Generic (PLEG): container finished" podID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerID="629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf" exitCode=0 Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.732852 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n58tq" event={"ID":"08dc9823-c5ed-451c-a202-312e9b4cd254","Type":"ContainerDied","Data":"629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf"} Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.736308 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.736808 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-c64b2" event={"ID":"ca2f73d1-0380-4fcf-9fde-35f821426fed","Type":"ContainerDied","Data":"f0828b9c5c98a0cabed1aba541dbb0ad7c039d4728621a1138fa826256600b06"} Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.736943 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0828b9c5c98a0cabed1aba541dbb0ad7c039d4728621a1138fa826256600b06" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.082385 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.175369 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/599bdb97-9d21-44b9-9a59-84320b1c4a6e-operator-scripts\") pod \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.175522 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w9s6\" (UniqueName: \"kubernetes.io/projected/599bdb97-9d21-44b9-9a59-84320b1c4a6e-kube-api-access-5w9s6\") pod \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.175786 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/599bdb97-9d21-44b9-9a59-84320b1c4a6e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "599bdb97-9d21-44b9-9a59-84320b1c4a6e" (UID: "599bdb97-9d21-44b9-9a59-84320b1c4a6e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.175882 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/599bdb97-9d21-44b9-9a59-84320b1c4a6e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.179760 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/599bdb97-9d21-44b9-9a59-84320b1c4a6e-kube-api-access-5w9s6" (OuterVolumeSpecName: "kube-api-access-5w9s6") pod "599bdb97-9d21-44b9-9a59-84320b1c4a6e" (UID: "599bdb97-9d21-44b9-9a59-84320b1c4a6e"). InnerVolumeSpecName "kube-api-access-5w9s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.277373 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w9s6\" (UniqueName: \"kubernetes.io/projected/599bdb97-9d21-44b9-9a59-84320b1c4a6e-kube-api-access-5w9s6\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.749964 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" event={"ID":"599bdb97-9d21-44b9-9a59-84320b1c4a6e","Type":"ContainerDied","Data":"bcbdb30b1631e4cc7ba330a7b5bac0ee6f3b18ba19f15b08d579b5924d2c1362"} Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.750011 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcbdb30b1631e4cc7ba330a7b5bac0ee6f3b18ba19f15b08d579b5924d2c1362" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.750020 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.752940 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n58tq" event={"ID":"08dc9823-c5ed-451c-a202-312e9b4cd254","Type":"ContainerStarted","Data":"55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0"} Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.781048 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n58tq" podStartSLOduration=3.344474552 podStartE2EDuration="4.781022049s" podCreationTimestamp="2026-01-26 23:33:36 +0000 UTC" firstStartedPulling="2026-01-26 23:33:38.718625823 +0000 UTC m=+1522.883333298" lastFinishedPulling="2026-01-26 23:33:40.1551733 +0000 UTC m=+1524.319880795" observedRunningTime="2026-01-26 23:33:40.776426344 +0000 UTC m=+1524.941133819" watchObservedRunningTime="2026-01-26 23:33:40.781022049 +0000 UTC m=+1524.945729514" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.345063 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d4txg"] Jan 26 23:33:46 crc kubenswrapper[4995]: E0126 23:33:46.346124 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2f73d1-0380-4fcf-9fde-35f821426fed" containerName="mariadb-database-create" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.346140 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2f73d1-0380-4fcf-9fde-35f821426fed" containerName="mariadb-database-create" Jan 26 23:33:46 crc kubenswrapper[4995]: E0126 23:33:46.346161 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599bdb97-9d21-44b9-9a59-84320b1c4a6e" containerName="mariadb-account-create-update" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.346169 4995 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="599bdb97-9d21-44b9-9a59-84320b1c4a6e" containerName="mariadb-account-create-update" Jan 26 23:33:46 crc kubenswrapper[4995]: E0126 23:33:46.346186 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" containerName="watcher-decision-engine" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.346195 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" containerName="watcher-decision-engine" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.346374 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" containerName="watcher-decision-engine" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.346387 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="599bdb97-9d21-44b9-9a59-84320b1c4a6e" containerName="mariadb-account-create-update" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.346402 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2f73d1-0380-4fcf-9fde-35f821426fed" containerName="mariadb-database-create" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.346993 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.349128 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.352210 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d4txg"] Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.352927 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-jcwzq" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.478420 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg9wc\" (UniqueName: \"kubernetes.io/projected/9b1297fe-4233-44a3-864c-2564bef1017f-kube-api-access-dg9wc\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.478740 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-db-sync-config-data\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.478775 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-config-data\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.478821 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.580317 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg9wc\" (UniqueName: \"kubernetes.io/projected/9b1297fe-4233-44a3-864c-2564bef1017f-kube-api-access-dg9wc\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.580872 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-db-sync-config-data\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.582071 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-config-data\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.582172 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 
23:33:46.587817 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-db-sync-config-data\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.588112 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-config-data\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.588678 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.604071 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg9wc\" (UniqueName: \"kubernetes.io/projected/9b1297fe-4233-44a3-864c-2564bef1017f-kube-api-access-dg9wc\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.669420 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.957357 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.957692 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:47 crc kubenswrapper[4995]: I0126 23:33:47.006238 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:47 crc kubenswrapper[4995]: I0126 23:33:47.204568 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d4txg"] Jan 26 23:33:47 crc kubenswrapper[4995]: I0126 23:33:47.821319 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" event={"ID":"9b1297fe-4233-44a3-864c-2564bef1017f","Type":"ContainerStarted","Data":"2f4a4987d76b545f02a7d8c08b9fd9eca391865fce1211a494dbae9aeadf38f3"} Jan 26 23:33:47 crc kubenswrapper[4995]: I0126 23:33:47.823078 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" event={"ID":"9b1297fe-4233-44a3-864c-2564bef1017f","Type":"ContainerStarted","Data":"a9f51607f45c39b4bbc7c12c507945758f52823611d78a21f56d37cba1b237c8"} Jan 26 23:33:47 crc kubenswrapper[4995]: I0126 23:33:47.841336 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" podStartSLOduration=1.8413055790000001 podStartE2EDuration="1.841305579s" podCreationTimestamp="2026-01-26 23:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:47.839405841 +0000 UTC m=+1532.004113306" 
watchObservedRunningTime="2026-01-26 23:33:47.841305579 +0000 UTC m=+1532.006013044" Jan 26 23:33:47 crc kubenswrapper[4995]: I0126 23:33:47.869429 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:49 crc kubenswrapper[4995]: I0126 23:33:49.838437 4995 generic.go:334] "Generic (PLEG): container finished" podID="9b1297fe-4233-44a3-864c-2564bef1017f" containerID="2f4a4987d76b545f02a7d8c08b9fd9eca391865fce1211a494dbae9aeadf38f3" exitCode=0 Jan 26 23:33:49 crc kubenswrapper[4995]: I0126 23:33:49.838513 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" event={"ID":"9b1297fe-4233-44a3-864c-2564bef1017f","Type":"ContainerDied","Data":"2f4a4987d76b545f02a7d8c08b9fd9eca391865fce1211a494dbae9aeadf38f3"} Jan 26 23:33:50 crc kubenswrapper[4995]: I0126 23:33:50.536690 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n58tq"] Jan 26 23:33:50 crc kubenswrapper[4995]: I0126 23:33:50.887986 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n58tq" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="registry-server" containerID="cri-o://55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0" gracePeriod=2 Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.358898 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.367272 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-config-data\") pod \"9b1297fe-4233-44a3-864c-2564bef1017f\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.367335 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-combined-ca-bundle\") pod \"9b1297fe-4233-44a3-864c-2564bef1017f\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.367446 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg9wc\" (UniqueName: \"kubernetes.io/projected/9b1297fe-4233-44a3-864c-2564bef1017f-kube-api-access-dg9wc\") pod \"9b1297fe-4233-44a3-864c-2564bef1017f\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.367486 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-db-sync-config-data\") pod \"9b1297fe-4233-44a3-864c-2564bef1017f\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.377623 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9b1297fe-4233-44a3-864c-2564bef1017f" (UID: "9b1297fe-4233-44a3-864c-2564bef1017f"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.377696 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b1297fe-4233-44a3-864c-2564bef1017f-kube-api-access-dg9wc" (OuterVolumeSpecName: "kube-api-access-dg9wc") pod "9b1297fe-4233-44a3-864c-2564bef1017f" (UID: "9b1297fe-4233-44a3-864c-2564bef1017f"). InnerVolumeSpecName "kube-api-access-dg9wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.406355 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b1297fe-4233-44a3-864c-2564bef1017f" (UID: "9b1297fe-4233-44a3-864c-2564bef1017f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.435448 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-config-data" (OuterVolumeSpecName: "config-data") pod "9b1297fe-4233-44a3-864c-2564bef1017f" (UID: "9b1297fe-4233-44a3-864c-2564bef1017f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.468975 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.469005 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.469020 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg9wc\" (UniqueName: \"kubernetes.io/projected/9b1297fe-4233-44a3-864c-2564bef1017f-kube-api-access-dg9wc\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.469031 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.470718 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.569499 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-catalog-content\") pod \"08dc9823-c5ed-451c-a202-312e9b4cd254\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.569972 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-utilities\") pod \"08dc9823-c5ed-451c-a202-312e9b4cd254\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.570016 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njx4n\" (UniqueName: \"kubernetes.io/projected/08dc9823-c5ed-451c-a202-312e9b4cd254-kube-api-access-njx4n\") pod \"08dc9823-c5ed-451c-a202-312e9b4cd254\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.582238 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-utilities" (OuterVolumeSpecName: "utilities") pod "08dc9823-c5ed-451c-a202-312e9b4cd254" (UID: "08dc9823-c5ed-451c-a202-312e9b4cd254"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.582969 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08dc9823-c5ed-451c-a202-312e9b4cd254-kube-api-access-njx4n" (OuterVolumeSpecName: "kube-api-access-njx4n") pod "08dc9823-c5ed-451c-a202-312e9b4cd254" (UID: "08dc9823-c5ed-451c-a202-312e9b4cd254"). InnerVolumeSpecName "kube-api-access-njx4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.591940 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08dc9823-c5ed-451c-a202-312e9b4cd254" (UID: "08dc9823-c5ed-451c-a202-312e9b4cd254"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.671344 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njx4n\" (UniqueName: \"kubernetes.io/projected/08dc9823-c5ed-451c-a202-312e9b4cd254-kube-api-access-njx4n\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.671409 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.671423 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.899286 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" event={"ID":"9b1297fe-4233-44a3-864c-2564bef1017f","Type":"ContainerDied","Data":"a9f51607f45c39b4bbc7c12c507945758f52823611d78a21f56d37cba1b237c8"} Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.899330 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9f51607f45c39b4bbc7c12c507945758f52823611d78a21f56d37cba1b237c8" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.899442 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.902726 4995 generic.go:334] "Generic (PLEG): container finished" podID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerID="55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0" exitCode=0 Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.902752 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n58tq" event={"ID":"08dc9823-c5ed-451c-a202-312e9b4cd254","Type":"ContainerDied","Data":"55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0"} Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.902769 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n58tq" event={"ID":"08dc9823-c5ed-451c-a202-312e9b4cd254","Type":"ContainerDied","Data":"ad6e8d03448bf620648f7ff3a9ff045c078111f4db8adc6377c026512acd50ae"} Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.902787 4995 scope.go:117] "RemoveContainer" containerID="55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.902854 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.932296 4995 scope.go:117] "RemoveContainer" containerID="629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.969087 4995 scope.go:117] "RemoveContainer" containerID="62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.976504 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n58tq"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.004606 4995 scope.go:117] "RemoveContainer" containerID="55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0" Jan 26 23:33:52 crc kubenswrapper[4995]: E0126 23:33:52.005058 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0\": container with ID starting with 55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0 not found: ID does not exist" containerID="55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.005144 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0"} err="failed to get container status \"55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0\": rpc error: code = NotFound desc = could not find container \"55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0\": container with ID starting with 55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0 not found: ID does not exist" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.005176 4995 scope.go:117] "RemoveContainer" 
containerID="629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf" Jan 26 23:33:52 crc kubenswrapper[4995]: E0126 23:33:52.005493 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf\": container with ID starting with 629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf not found: ID does not exist" containerID="629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.005527 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf"} err="failed to get container status \"629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf\": rpc error: code = NotFound desc = could not find container \"629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf\": container with ID starting with 629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf not found: ID does not exist" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.005548 4995 scope.go:117] "RemoveContainer" containerID="62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68" Jan 26 23:33:52 crc kubenswrapper[4995]: E0126 23:33:52.005782 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68\": container with ID starting with 62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68 not found: ID does not exist" containerID="62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.005802 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68"} err="failed to get container status \"62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68\": rpc error: code = NotFound desc = could not find container \"62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68\": container with ID starting with 62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68 not found: ID does not exist" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.005907 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n58tq"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.104908 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:52 crc kubenswrapper[4995]: E0126 23:33:52.105262 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="registry-server" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.105274 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="registry-server" Jan 26 23:33:52 crc kubenswrapper[4995]: E0126 23:33:52.105299 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="extract-content" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.105305 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="extract-content" Jan 26 23:33:52 crc kubenswrapper[4995]: E0126 23:33:52.105314 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b1297fe-4233-44a3-864c-2564bef1017f" containerName="watcher-kuttl-db-sync" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.105320 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b1297fe-4233-44a3-864c-2564bef1017f" 
containerName="watcher-kuttl-db-sync" Jan 26 23:33:52 crc kubenswrapper[4995]: E0126 23:33:52.105336 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="extract-utilities" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.105341 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="extract-utilities" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.105467 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="registry-server" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.105481 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b1297fe-4233-44a3-864c-2564bef1017f" containerName="watcher-kuttl-db-sync" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.106038 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.108138 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.112293 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-jcwzq" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.115594 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.177725 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4eea70-3af8-412b-8a7f-8abda2350f7a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.177774 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.177813 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmr4l\" (UniqueName: \"kubernetes.io/projected/fd4eea70-3af8-412b-8a7f-8abda2350f7a-kube-api-access-jmr4l\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.177851 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.177869 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.177892 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.211382 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.212921 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.214836 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.232049 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.234638 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.239965 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.257209 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.258463 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.261378 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.267853 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.275852 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280150 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280197 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280237 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmr4l\" (UniqueName: \"kubernetes.io/projected/fd4eea70-3af8-412b-8a7f-8abda2350f7a-kube-api-access-jmr4l\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280257 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280280 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280300 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39b9a08a-84f3-4779-bc4c-1cf42869c99d-logs\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280316 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280354 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn67w\" (UniqueName: \"kubernetes.io/projected/7fd4c263-1050-4645-a224-8e1f758e4495-kube-api-access-tn67w\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280379 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280399 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280421 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280447 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280470 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280491 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfc8556-a69e-418c-b52e-de4b1baa474f-logs\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280527 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hvxn\" (UniqueName: \"kubernetes.io/projected/39b9a08a-84f3-4779-bc4c-1cf42869c99d-kube-api-access-9hvxn\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280544 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280573 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280595 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4eea70-3af8-412b-8a7f-8abda2350f7a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 
23:33:52.280619 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280639 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280656 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd4c263-1050-4645-a224-8e1f758e4495-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280671 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280697 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlpcl\" (UniqueName: \"kubernetes.io/projected/8cfc8556-a69e-418c-b52e-de4b1baa474f-kube-api-access-mlpcl\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc 
kubenswrapper[4995]: I0126 23:33:52.284693 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4eea70-3af8-412b-8a7f-8abda2350f7a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.286368 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.288603 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.288792 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.293216 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc 
kubenswrapper[4995]: I0126 23:33:52.302925 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmr4l\" (UniqueName: \"kubernetes.io/projected/fd4eea70-3af8-412b-8a7f-8abda2350f7a-kube-api-access-jmr4l\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383166 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383285 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlpcl\" (UniqueName: \"kubernetes.io/projected/8cfc8556-a69e-418c-b52e-de4b1baa474f-kube-api-access-mlpcl\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383332 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383380 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 
23:33:52.383412 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383462 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383484 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39b9a08a-84f3-4779-bc4c-1cf42869c99d-logs\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383525 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383565 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn67w\" (UniqueName: \"kubernetes.io/projected/7fd4c263-1050-4645-a224-8e1f758e4495-kube-api-access-tn67w\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383618 4995 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383659 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383711 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfc8556-a69e-418c-b52e-de4b1baa474f-logs\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383775 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hvxn\" (UniqueName: \"kubernetes.io/projected/39b9a08a-84f3-4779-bc4c-1cf42869c99d-kube-api-access-9hvxn\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383803 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383857 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383893 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383915 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd4c263-1050-4645-a224-8e1f758e4495-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.384542 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39b9a08a-84f3-4779-bc4c-1cf42869c99d-logs\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.384610 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfc8556-a69e-418c-b52e-de4b1baa474f-logs\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.385280 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd4c263-1050-4645-a224-8e1f758e4495-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.387364 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.387467 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.387613 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.390699 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.390989 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: 
I0126 23:33:52.391124 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.391396 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.393258 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.394059 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.394460 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.394829 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.402281 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlpcl\" (UniqueName: \"kubernetes.io/projected/8cfc8556-a69e-418c-b52e-de4b1baa474f-kube-api-access-mlpcl\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.405373 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hvxn\" (UniqueName: \"kubernetes.io/projected/39b9a08a-84f3-4779-bc4c-1cf42869c99d-kube-api-access-9hvxn\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.409169 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn67w\" (UniqueName: \"kubernetes.io/projected/7fd4c263-1050-4645-a224-8e1f758e4495-kube-api-access-tn67w\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.444543 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.530160 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" path="/var/lib/kubelet/pods/08dc9823-c5ed-451c-a202-312e9b4cd254/volumes" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.531914 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.560930 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.578418 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.885498 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:52 crc kubenswrapper[4995]: W0126 23:33:52.889153 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd4eea70_3af8_412b_8a7f_8abda2350f7a.slice/crio-cbea931bd0838e2c97cfcebadf6458ddc41bc04afdf7d81934ae7c0566e45a9b WatchSource:0}: Error finding container cbea931bd0838e2c97cfcebadf6458ddc41bc04afdf7d81934ae7c0566e45a9b: Status 404 returned error can't find the container with id cbea931bd0838e2c97cfcebadf6458ddc41bc04afdf7d81934ae7c0566e45a9b Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.921613 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"fd4eea70-3af8-412b-8a7f-8abda2350f7a","Type":"ContainerStarted","Data":"cbea931bd0838e2c97cfcebadf6458ddc41bc04afdf7d81934ae7c0566e45a9b"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.059780 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.113175 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.125687 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:53 crc kubenswrapper[4995]: W0126 23:33:53.160063 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fd4c263_1050_4645_a224_8e1f758e4495.slice/crio-dae048464ca14139239006ce1ddddc5b74e74d486974462fe0bfd5796420c08e WatchSource:0}: Error finding container dae048464ca14139239006ce1ddddc5b74e74d486974462fe0bfd5796420c08e: Status 404 returned error can't find the container with id dae048464ca14139239006ce1ddddc5b74e74d486974462fe0bfd5796420c08e Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.931036 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"39b9a08a-84f3-4779-bc4c-1cf42869c99d","Type":"ContainerStarted","Data":"92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.931366 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"39b9a08a-84f3-4779-bc4c-1cf42869c99d","Type":"ContainerStarted","Data":"697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.931401 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.931412 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"39b9a08a-84f3-4779-bc4c-1cf42869c99d","Type":"ContainerStarted","Data":"3a6cd3187438302097e41f8997601ad4ed78776668fbf86e5b5c905b7d06906d"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.932840 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" 
event={"ID":"8cfc8556-a69e-418c-b52e-de4b1baa474f","Type":"ContainerStarted","Data":"00b844ad0368bdac37b62cc021cdfa035e9f04f04f9e4a86053c4753583ad2b4"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.932881 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8cfc8556-a69e-418c-b52e-de4b1baa474f","Type":"ContainerStarted","Data":"a651eaf938bf7342c3b42b676d6d6e0269a84942ed71a9342b8438edbfad3533"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.932894 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8cfc8556-a69e-418c-b52e-de4b1baa474f","Type":"ContainerStarted","Data":"2b794c5d957bd49aebae6d5afbe71ffae4c423dc061005fe477c13b7c05312fd"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.935673 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"fd4eea70-3af8-412b-8a7f-8abda2350f7a","Type":"ContainerStarted","Data":"7ba57781504f7092ac75ef403a28945ae13079b33c156708e6f728cfe78e77e8"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.937420 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"7fd4c263-1050-4645-a224-8e1f758e4495","Type":"ContainerStarted","Data":"d19632ddd195db4ccb4d1fec947e424c4ea9433d0900fd8944957a701581ae55"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.937446 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"7fd4c263-1050-4645-a224-8e1f758e4495","Type":"ContainerStarted","Data":"dae048464ca14139239006ce1ddddc5b74e74d486974462fe0bfd5796420c08e"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.973652 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.973631935 
podStartE2EDuration="1.973631935s" podCreationTimestamp="2026-01-26 23:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:53.970439625 +0000 UTC m=+1538.135147090" watchObservedRunningTime="2026-01-26 23:33:53.973631935 +0000 UTC m=+1538.138339410" Jan 26 23:33:54 crc kubenswrapper[4995]: I0126 23:33:54.025905 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.025883053 podStartE2EDuration="2.025883053s" podCreationTimestamp="2026-01-26 23:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:54.015811421 +0000 UTC m=+1538.180518886" watchObservedRunningTime="2026-01-26 23:33:54.025883053 +0000 UTC m=+1538.190590518" Jan 26 23:33:54 crc kubenswrapper[4995]: I0126 23:33:54.109330 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=2.109314332 podStartE2EDuration="2.109314332s" podCreationTimestamp="2026-01-26 23:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:54.065192258 +0000 UTC m=+1538.229899723" watchObservedRunningTime="2026-01-26 23:33:54.109314332 +0000 UTC m=+1538.274021797" Jan 26 23:33:54 crc kubenswrapper[4995]: I0126 23:33:54.946504 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:56 crc kubenswrapper[4995]: I0126 23:33:56.152589 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:56 crc kubenswrapper[4995]: I0126 23:33:56.189844 4995 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=4.189813682 podStartE2EDuration="4.189813682s" podCreationTimestamp="2026-01-26 23:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:54.1084173 +0000 UTC m=+1538.273124755" watchObservedRunningTime="2026-01-26 23:33:56.189813682 +0000 UTC m=+1540.354521147" Jan 26 23:33:57 crc kubenswrapper[4995]: I0126 23:33:57.119448 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:57 crc kubenswrapper[4995]: I0126 23:33:57.533401 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:57 crc kubenswrapper[4995]: I0126 23:33:57.562306 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:57 crc kubenswrapper[4995]: I0126 23:33:57.579111 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:02 crc kubenswrapper[4995]: I0126 23:34:02.444988 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:02 crc kubenswrapper[4995]: I0126 23:34:02.497873 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:02 crc kubenswrapper[4995]: I0126 23:34:02.537330 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:02 crc kubenswrapper[4995]: I0126 23:34:02.546990 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:02 crc kubenswrapper[4995]: 
I0126 23:34:02.563055 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:34:02 crc kubenswrapper[4995]: I0126 23:34:02.579838 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:02 crc kubenswrapper[4995]: I0126 23:34:02.580707 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:34:02 crc kubenswrapper[4995]: I0126 23:34:02.625777 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:03 crc kubenswrapper[4995]: I0126 23:34:03.031356 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:03 crc kubenswrapper[4995]: I0126 23:34:03.047613 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:34:03 crc kubenswrapper[4995]: I0126 23:34:03.054631 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:03 crc kubenswrapper[4995]: I0126 23:34:03.069834 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:03 crc kubenswrapper[4995]: I0126 23:34:03.070671 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:05 crc kubenswrapper[4995]: I0126 23:34:05.532713 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:05 crc kubenswrapper[4995]: I0126 23:34:05.533270 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" 
podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-central-agent" containerID="cri-o://c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468" gracePeriod=30 Jan 26 23:34:05 crc kubenswrapper[4995]: I0126 23:34:05.533336 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="proxy-httpd" containerID="cri-o://932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca" gracePeriod=30 Jan 26 23:34:05 crc kubenswrapper[4995]: I0126 23:34:05.533315 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="sg-core" containerID="cri-o://3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427" gracePeriod=30 Jan 26 23:34:05 crc kubenswrapper[4995]: I0126 23:34:05.533351 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-notification-agent" containerID="cri-o://3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885" gracePeriod=30 Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.055477 4995 generic.go:334] "Generic (PLEG): container finished" podID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerID="932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca" exitCode=0 Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.055508 4995 generic.go:334] "Generic (PLEG): container finished" podID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerID="3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427" exitCode=2 Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.055515 4995 generic.go:334] "Generic (PLEG): container finished" podID="d3ce857e-376e-4fd3-b74a-17165502ac6d" 
containerID="c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468" exitCode=0 Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.055533 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerDied","Data":"932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca"} Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.055558 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerDied","Data":"3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427"} Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.055567 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerDied","Data":"c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468"} Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.700087 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.766686 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-config-data\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.766777 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-combined-ca-bundle\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.766840 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-scripts\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.766927 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-run-httpd\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.766960 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-sg-core-conf-yaml\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.767121 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-log-httpd\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.767194 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-ceilometer-tls-certs\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.767254 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfszm\" (UniqueName: \"kubernetes.io/projected/d3ce857e-376e-4fd3-b74a-17165502ac6d-kube-api-access-wfszm\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.775448 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.776184 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.778947 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ce857e-376e-4fd3-b74a-17165502ac6d-kube-api-access-wfszm" (OuterVolumeSpecName: "kube-api-access-wfszm") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "kube-api-access-wfszm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.783431 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-scripts" (OuterVolumeSpecName: "scripts") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.816793 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.832139 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.866574 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.868671 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-config-data" (OuterVolumeSpecName: "config-data") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869622 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869649 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869662 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfszm\" (UniqueName: \"kubernetes.io/projected/d3ce857e-376e-4fd3-b74a-17165502ac6d-kube-api-access-wfszm\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869670 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869678 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869686 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869693 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869701 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.067155 4995 generic.go:334] "Generic (PLEG): container finished" podID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerID="3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885" exitCode=0 Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.067224 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.067250 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerDied","Data":"3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885"} Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.067302 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerDied","Data":"58ac57ce8561ca84068d3e00e6215e8d0c2515e11d417b9339ed05b0b53177bc"} Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.067379 4995 scope.go:117] "RemoveContainer" containerID="932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.091558 4995 scope.go:117] "RemoveContainer" containerID="3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.107022 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.113711 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.119481 4995 scope.go:117] "RemoveContainer" containerID="3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.141436 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.141990 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-notification-agent" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.142014 4995 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-notification-agent" Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.142032 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-central-agent" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.142040 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-central-agent" Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.142057 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="proxy-httpd" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.142068 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="proxy-httpd" Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.142081 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="sg-core" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.142087 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="sg-core" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.142317 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-central-agent" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.142334 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-notification-agent" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.142346 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="sg-core" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 
23:34:07.142361 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="proxy-httpd" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.145314 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.149846 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.149997 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.150021 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.158244 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.170792 4995 scope.go:117] "RemoveContainer" containerID="c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.177371 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-run-httpd\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.177460 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-config-data\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc 
kubenswrapper[4995]: I0126 23:34:07.177504 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.177538 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-scripts\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.177989 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.178040 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-log-httpd\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.178080 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.178882 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5j7w\" (UniqueName: \"kubernetes.io/projected/a3386474-d50c-4dcf-b6b5-9aae87610ee5-kube-api-access-d5j7w\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.204643 4995 scope.go:117] "RemoveContainer" containerID="932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca" Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.216637 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca\": container with ID starting with 932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca not found: ID does not exist" containerID="932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.216712 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca"} err="failed to get container status \"932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca\": rpc error: code = NotFound desc = could not find container \"932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca\": container with ID starting with 932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca not found: ID does not exist" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.216825 4995 scope.go:117] "RemoveContainer" containerID="3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427" Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.217593 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427\": container with ID starting with 3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427 not found: ID does not exist" containerID="3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.217717 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427"} err="failed to get container status \"3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427\": rpc error: code = NotFound desc = could not find container \"3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427\": container with ID starting with 3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427 not found: ID does not exist" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.217751 4995 scope.go:117] "RemoveContainer" containerID="3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885" Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.218586 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885\": container with ID starting with 3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885 not found: ID does not exist" containerID="3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.218636 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885"} err="failed to get container status \"3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885\": rpc error: code = NotFound desc = could not find container \"3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885\": container with ID 
starting with 3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885 not found: ID does not exist" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.218666 4995 scope.go:117] "RemoveContainer" containerID="c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468" Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.218958 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468\": container with ID starting with c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468 not found: ID does not exist" containerID="c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.219002 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468"} err="failed to get container status \"c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468\": rpc error: code = NotFound desc = could not find container \"c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468\": container with ID starting with c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468 not found: ID does not exist" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280530 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-log-httpd\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280599 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280661 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5j7w\" (UniqueName: \"kubernetes.io/projected/a3386474-d50c-4dcf-b6b5-9aae87610ee5-kube-api-access-d5j7w\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280708 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-run-httpd\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280750 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-config-data\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280781 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280808 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-scripts\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280866 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280968 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-log-httpd\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.281756 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-run-httpd\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.290648 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.291086 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.291428 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-config-data\") pod 
\"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.292224 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-scripts\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.293209 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.303835 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5j7w\" (UniqueName: \"kubernetes.io/projected/a3386474-d50c-4dcf-b6b5-9aae87610ee5-kube-api-access-d5j7w\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.463051 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.997459 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:08 crc kubenswrapper[4995]: W0126 23:34:08.001876 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3386474_d50c_4dcf_b6b5_9aae87610ee5.slice/crio-464dc0604caf674cfc5cd0b86de7eea3f3ee7745d7ecd919bcd18bc051110f62 WatchSource:0}: Error finding container 464dc0604caf674cfc5cd0b86de7eea3f3ee7745d7ecd919bcd18bc051110f62: Status 404 returned error can't find the container with id 464dc0604caf674cfc5cd0b86de7eea3f3ee7745d7ecd919bcd18bc051110f62 Jan 26 23:34:08 crc kubenswrapper[4995]: I0126 23:34:08.077413 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerStarted","Data":"464dc0604caf674cfc5cd0b86de7eea3f3ee7745d7ecd919bcd18bc051110f62"} Jan 26 23:34:08 crc kubenswrapper[4995]: I0126 23:34:08.529012 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" path="/var/lib/kubelet/pods/d3ce857e-376e-4fd3-b74a-17165502ac6d/volumes" Jan 26 23:34:09 crc kubenswrapper[4995]: I0126 23:34:09.085639 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerStarted","Data":"e39cd8d3b2d8dc5768ce6e0e2ae2c899a43d8ff5921753135b3150a977d5edda"} Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.099244 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerStarted","Data":"7d15cca2bc1baf6063b034c732082ceda61f3e8a8fa3faca8867cf61c611773e"} Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 
23:34:10.641347 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.653223 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.699506 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.838117 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.838173 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2648dc76-5b29-4f04-817d-f0fdd488f830-logs\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.838198 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.838213 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: 
\"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.838490 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwr7q\" (UniqueName: \"kubernetes.io/projected/2648dc76-5b29-4f04-817d-f0fdd488f830-kube-api-access-xwr7q\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.838584 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.940069 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwr7q\" (UniqueName: \"kubernetes.io/projected/2648dc76-5b29-4f04-817d-f0fdd488f830-kube-api-access-xwr7q\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.940514 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.941658 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: 
\"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.941705 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2648dc76-5b29-4f04-817d-f0fdd488f830-logs\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.941726 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.941790 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.942644 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2648dc76-5b29-4f04-817d-f0fdd488f830-logs\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.944802 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc 
kubenswrapper[4995]: I0126 23:34:10.946429 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.947786 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.947816 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.958868 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwr7q\" (UniqueName: \"kubernetes.io/projected/2648dc76-5b29-4f04-817d-f0fdd488f830-kube-api-access-xwr7q\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:11 crc kubenswrapper[4995]: I0126 23:34:11.014191 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:11 crc kubenswrapper[4995]: I0126 23:34:11.111964 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerStarted","Data":"5d8eb0fafb47003f7b3d91ad0b8cd1cfe249d5534930cfb4d031a36317e1a5a1"} Jan 26 23:34:11 crc kubenswrapper[4995]: W0126 23:34:11.497406 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2648dc76_5b29_4f04_817d_f0fdd488f830.slice/crio-49fb0ec72b531eb14fdf7c505294e8fb258ee01215e4efb28e5a0951e6d10a5d WatchSource:0}: Error finding container 49fb0ec72b531eb14fdf7c505294e8fb258ee01215e4efb28e5a0951e6d10a5d: Status 404 returned error can't find the container with id 49fb0ec72b531eb14fdf7c505294e8fb258ee01215e4efb28e5a0951e6d10a5d Jan 26 23:34:11 crc kubenswrapper[4995]: I0126 23:34:11.524910 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.124451 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerStarted","Data":"3ba1f75aff84b3911ed9b4b0c5a01c12ee8d7a0011e88c10d24956806d412ce3"} Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.125233 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.126806 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"2648dc76-5b29-4f04-817d-f0fdd488f830","Type":"ContainerStarted","Data":"1e6e82e02a900f23b63f0dfe11e39fe3d2309a39b59b7f7c85cb350cdac04ae0"} Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.126830 4995 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"2648dc76-5b29-4f04-817d-f0fdd488f830","Type":"ContainerStarted","Data":"e9bb2055bcdf2712ecbb3c7daed756c651b2b4bf46893b70080bb192560a272d"} Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.126840 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"2648dc76-5b29-4f04-817d-f0fdd488f830","Type":"ContainerStarted","Data":"49fb0ec72b531eb14fdf7c505294e8fb258ee01215e4efb28e5a0951e6d10a5d"} Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.127427 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.129841 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.200:9322/\": dial tcp 10.217.0.200:9322: connect: connection refused" Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.151693 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.595581165 podStartE2EDuration="5.15167577s" podCreationTimestamp="2026-01-26 23:34:07 +0000 UTC" firstStartedPulling="2026-01-26 23:34:08.004397375 +0000 UTC m=+1552.169104840" lastFinishedPulling="2026-01-26 23:34:11.56049197 +0000 UTC m=+1555.725199445" observedRunningTime="2026-01-26 23:34:12.151049845 +0000 UTC m=+1556.315757320" watchObservedRunningTime="2026-01-26 23:34:12.15167577 +0000 UTC m=+1556.316383235" Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.185655 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-2" podStartSLOduration=2.18562014 podStartE2EDuration="2.18562014s" podCreationTimestamp="2026-01-26 23:34:10 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:34:12.182780849 +0000 UTC m=+1556.347488314" watchObservedRunningTime="2026-01-26 23:34:12.18562014 +0000 UTC m=+1556.350327605" Jan 26 23:34:15 crc kubenswrapper[4995]: I0126 23:34:15.470144 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:16 crc kubenswrapper[4995]: I0126 23:34:16.014816 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:18 crc kubenswrapper[4995]: I0126 23:34:18.431426 4995 scope.go:117] "RemoveContainer" containerID="5c8b671cebf48be8f42cb3eef0c6c4d073d6c81d7a64dfc2632acbf31acbc964" Jan 26 23:34:18 crc kubenswrapper[4995]: I0126 23:34:18.466850 4995 scope.go:117] "RemoveContainer" containerID="21fc0623b802d82a641a134593a6142947f12ed59ae9a3e0731b353104bba872" Jan 26 23:34:21 crc kubenswrapper[4995]: I0126 23:34:21.014990 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:21 crc kubenswrapper[4995]: I0126 23:34:21.026027 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:21 crc kubenswrapper[4995]: I0126 23:34:21.233848 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:22 crc kubenswrapper[4995]: I0126 23:34:22.323749 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 26 23:34:22 crc kubenswrapper[4995]: I0126 23:34:22.374849 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:34:22 crc kubenswrapper[4995]: I0126 23:34:22.375207 4995 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-kuttl-api-log" containerID="cri-o://a651eaf938bf7342c3b42b676d6d6e0269a84942ed71a9342b8438edbfad3533" gracePeriod=30 Jan 26 23:34:22 crc kubenswrapper[4995]: I0126 23:34:22.375798 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-api" containerID="cri-o://00b844ad0368bdac37b62cc021cdfa035e9f04f04f9e4a86053c4753583ad2b4" gracePeriod=30 Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.181977 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.197:9322/\": read tcp 10.217.0.2:42618->10.217.0.197:9322: read: connection reset by peer" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.182003 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.197:9322/\": read tcp 10.217.0.2:42620->10.217.0.197:9322: read: connection reset by peer" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.255150 4995 generic.go:334] "Generic (PLEG): container finished" podID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerID="00b844ad0368bdac37b62cc021cdfa035e9f04f04f9e4a86053c4753583ad2b4" exitCode=0 Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.255198 4995 generic.go:334] "Generic (PLEG): container finished" podID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerID="a651eaf938bf7342c3b42b676d6d6e0269a84942ed71a9342b8438edbfad3533" exitCode=143 Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.255233 4995 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8cfc8556-a69e-418c-b52e-de4b1baa474f","Type":"ContainerDied","Data":"00b844ad0368bdac37b62cc021cdfa035e9f04f04f9e4a86053c4753583ad2b4"} Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.255289 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8cfc8556-a69e-418c-b52e-de4b1baa474f","Type":"ContainerDied","Data":"a651eaf938bf7342c3b42b676d6d6e0269a84942ed71a9342b8438edbfad3533"} Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.255477 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-kuttl-api-log" containerID="cri-o://e9bb2055bcdf2712ecbb3c7daed756c651b2b4bf46893b70080bb192560a272d" gracePeriod=30 Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.255631 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-api" containerID="cri-o://1e6e82e02a900f23b63f0dfe11e39fe3d2309a39b59b7f7c85cb350cdac04ae0" gracePeriod=30 Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.611461 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.688377 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfc8556-a69e-418c-b52e-de4b1baa474f-logs\") pod \"8cfc8556-a69e-418c-b52e-de4b1baa474f\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.688475 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlpcl\" (UniqueName: \"kubernetes.io/projected/8cfc8556-a69e-418c-b52e-de4b1baa474f-kube-api-access-mlpcl\") pod \"8cfc8556-a69e-418c-b52e-de4b1baa474f\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.688507 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-config-data\") pod \"8cfc8556-a69e-418c-b52e-de4b1baa474f\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.688555 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-combined-ca-bundle\") pod \"8cfc8556-a69e-418c-b52e-de4b1baa474f\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.688603 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-cert-memcached-mtls\") pod \"8cfc8556-a69e-418c-b52e-de4b1baa474f\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.688621 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-custom-prometheus-ca\") pod \"8cfc8556-a69e-418c-b52e-de4b1baa474f\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") "
Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.690424 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cfc8556-a69e-418c-b52e-de4b1baa474f-logs" (OuterVolumeSpecName: "logs") pod "8cfc8556-a69e-418c-b52e-de4b1baa474f" (UID: "8cfc8556-a69e-418c-b52e-de4b1baa474f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.701313 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cfc8556-a69e-418c-b52e-de4b1baa474f-kube-api-access-mlpcl" (OuterVolumeSpecName: "kube-api-access-mlpcl") pod "8cfc8556-a69e-418c-b52e-de4b1baa474f" (UID: "8cfc8556-a69e-418c-b52e-de4b1baa474f"). InnerVolumeSpecName "kube-api-access-mlpcl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.718363 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8cfc8556-a69e-418c-b52e-de4b1baa474f" (UID: "8cfc8556-a69e-418c-b52e-de4b1baa474f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.774469 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cfc8556-a69e-418c-b52e-de4b1baa474f" (UID: "8cfc8556-a69e-418c-b52e-de4b1baa474f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.780391 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-config-data" (OuterVolumeSpecName: "config-data") pod "8cfc8556-a69e-418c-b52e-de4b1baa474f" (UID: "8cfc8556-a69e-418c-b52e-de4b1baa474f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.790470 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfc8556-a69e-418c-b52e-de4b1baa474f-logs\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.790501 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlpcl\" (UniqueName: \"kubernetes.io/projected/8cfc8556-a69e-418c-b52e-de4b1baa474f-kube-api-access-mlpcl\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.790513 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.790523 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.790531 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.809967 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "8cfc8556-a69e-418c-b52e-de4b1baa474f" (UID: "8cfc8556-a69e-418c-b52e-de4b1baa474f"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.892069 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.269786 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8cfc8556-a69e-418c-b52e-de4b1baa474f","Type":"ContainerDied","Data":"2b794c5d957bd49aebae6d5afbe71ffae4c423dc061005fe477c13b7c05312fd"}
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.269837 4995 scope.go:117] "RemoveContainer" containerID="00b844ad0368bdac37b62cc021cdfa035e9f04f04f9e4a86053c4753583ad2b4"
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.269965 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.275069 4995 generic.go:334] "Generic (PLEG): container finished" podID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerID="1e6e82e02a900f23b63f0dfe11e39fe3d2309a39b59b7f7c85cb350cdac04ae0" exitCode=0
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.275122 4995 generic.go:334] "Generic (PLEG): container finished" podID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerID="e9bb2055bcdf2712ecbb3c7daed756c651b2b4bf46893b70080bb192560a272d" exitCode=143
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.275125 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"2648dc76-5b29-4f04-817d-f0fdd488f830","Type":"ContainerDied","Data":"1e6e82e02a900f23b63f0dfe11e39fe3d2309a39b59b7f7c85cb350cdac04ae0"}
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.275176 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"2648dc76-5b29-4f04-817d-f0fdd488f830","Type":"ContainerDied","Data":"e9bb2055bcdf2712ecbb3c7daed756c651b2b4bf46893b70080bb192560a272d"}
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.314717 4995 scope.go:117] "RemoveContainer" containerID="a651eaf938bf7342c3b42b676d6d6e0269a84942ed71a9342b8438edbfad3533"
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.319639 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.331561 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.519025 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2"
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.534388 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" path="/var/lib/kubelet/pods/8cfc8556-a69e-418c-b52e-de4b1baa474f/volumes"
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.615479 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-config-data\") pod \"2648dc76-5b29-4f04-817d-f0fdd488f830\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") "
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.615524 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-cert-memcached-mtls\") pod \"2648dc76-5b29-4f04-817d-f0fdd488f830\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") "
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.615566 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-custom-prometheus-ca\") pod \"2648dc76-5b29-4f04-817d-f0fdd488f830\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") "
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.615666 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2648dc76-5b29-4f04-817d-f0fdd488f830-logs\") pod \"2648dc76-5b29-4f04-817d-f0fdd488f830\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") "
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.615693 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-combined-ca-bundle\") pod \"2648dc76-5b29-4f04-817d-f0fdd488f830\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") "
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.615727 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwr7q\" (UniqueName: \"kubernetes.io/projected/2648dc76-5b29-4f04-817d-f0fdd488f830-kube-api-access-xwr7q\") pod \"2648dc76-5b29-4f04-817d-f0fdd488f830\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") "
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.616180 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2648dc76-5b29-4f04-817d-f0fdd488f830-logs" (OuterVolumeSpecName: "logs") pod "2648dc76-5b29-4f04-817d-f0fdd488f830" (UID: "2648dc76-5b29-4f04-817d-f0fdd488f830"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.620315 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2648dc76-5b29-4f04-817d-f0fdd488f830-kube-api-access-xwr7q" (OuterVolumeSpecName: "kube-api-access-xwr7q") pod "2648dc76-5b29-4f04-817d-f0fdd488f830" (UID: "2648dc76-5b29-4f04-817d-f0fdd488f830"). InnerVolumeSpecName "kube-api-access-xwr7q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.646367 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "2648dc76-5b29-4f04-817d-f0fdd488f830" (UID: "2648dc76-5b29-4f04-817d-f0fdd488f830"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.648865 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2648dc76-5b29-4f04-817d-f0fdd488f830" (UID: "2648dc76-5b29-4f04-817d-f0fdd488f830"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.655310 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-config-data" (OuterVolumeSpecName: "config-data") pod "2648dc76-5b29-4f04-817d-f0fdd488f830" (UID: "2648dc76-5b29-4f04-817d-f0fdd488f830"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.678339 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "2648dc76-5b29-4f04-817d-f0fdd488f830" (UID: "2648dc76-5b29-4f04-817d-f0fdd488f830"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.717071 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwr7q\" (UniqueName: \"kubernetes.io/projected/2648dc76-5b29-4f04-817d-f0fdd488f830-kube-api-access-xwr7q\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.717136 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.717173 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.717182 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.717192 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2648dc76-5b29-4f04-817d-f0fdd488f830-logs\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.717200 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:25 crc kubenswrapper[4995]: I0126 23:34:25.292707 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"2648dc76-5b29-4f04-817d-f0fdd488f830","Type":"ContainerDied","Data":"49fb0ec72b531eb14fdf7c505294e8fb258ee01215e4efb28e5a0951e6d10a5d"}
Jan 26 23:34:25 crc kubenswrapper[4995]: I0126 23:34:25.292775 4995 scope.go:117] "RemoveContainer" containerID="1e6e82e02a900f23b63f0dfe11e39fe3d2309a39b59b7f7c85cb350cdac04ae0"
Jan 26 23:34:25 crc kubenswrapper[4995]: I0126 23:34:25.292953 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2"
Jan 26 23:34:25 crc kubenswrapper[4995]: I0126 23:34:25.334830 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"]
Jan 26 23:34:25 crc kubenswrapper[4995]: I0126 23:34:25.336092 4995 scope.go:117] "RemoveContainer" containerID="e9bb2055bcdf2712ecbb3c7daed756c651b2b4bf46893b70080bb192560a272d"
Jan 26 23:34:25 crc kubenswrapper[4995]: I0126 23:34:25.346141 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"]
Jan 26 23:34:26 crc kubenswrapper[4995]: I0126 23:34:26.538675 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" path="/var/lib/kubelet/pods/2648dc76-5b29-4f04-817d-f0fdd488f830/volumes"
Jan 26 23:34:26 crc kubenswrapper[4995]: I0126 23:34:26.579512 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 26 23:34:26 crc kubenswrapper[4995]: I0126 23:34:26.579785 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-kuttl-api-log" containerID="cri-o://697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f" gracePeriod=30
Jan 26 23:34:26 crc kubenswrapper[4995]: I0126 23:34:26.580294 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-api" containerID="cri-o://92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b" gracePeriod=30
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.321282 4995 generic.go:334] "Generic (PLEG): container finished" podID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerID="697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f" exitCode=143
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.321330 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"39b9a08a-84f3-4779-bc4c-1cf42869c99d","Type":"ContainerDied","Data":"697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f"}
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.532740 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.196:9322/\": dial tcp 10.217.0.196:9322: connect: connection refused"
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.532764 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.196:9322/\": dial tcp 10.217.0.196:9322: connect: connection refused"
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.777910 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d4txg"]
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.788502 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d4txg"]
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.817564 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher1555-account-delete-g8298"]
Jan 26 23:34:27 crc kubenswrapper[4995]: E0126 23:34:27.817874 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-kuttl-api-log"
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.817886 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-kuttl-api-log"
Jan 26 23:34:27 crc kubenswrapper[4995]: E0126 23:34:27.817894 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-api"
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.817901 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-api"
Jan 26 23:34:27 crc kubenswrapper[4995]: E0126 23:34:27.817915 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-kuttl-api-log"
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.817922 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-kuttl-api-log"
Jan 26 23:34:27 crc kubenswrapper[4995]: E0126 23:34:27.817936 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-api"
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.817941 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-api"
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.818128 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-kuttl-api-log"
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.818148 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-kuttl-api-log"
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.818158 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-api"
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.818165 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-api"
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.818684 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher1555-account-delete-g8298"
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.848747 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher1555-account-delete-g8298"]
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.889285 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.889682 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="7fd4c263-1050-4645-a224-8e1f758e4495" containerName="watcher-applier" containerID="cri-o://d19632ddd195db4ccb4d1fec947e424c4ea9433d0900fd8944957a701581ae55" gracePeriod=30
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.912841 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.913158 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="fd4eea70-3af8-412b-8a7f-8abda2350f7a" containerName="watcher-decision-engine" containerID="cri-o://7ba57781504f7092ac75ef403a28945ae13079b33c156708e6f728cfe78e77e8" gracePeriod=30
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.981871 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq6kf\" (UniqueName: \"kubernetes.io/projected/47c8ef00-c407-45a9-bc09-b975263baccf-kube-api-access-xq6kf\") pod \"watcher1555-account-delete-g8298\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") " pod="watcher-kuttl-default/watcher1555-account-delete-g8298"
Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.981938 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c8ef00-c407-45a9-bc09-b975263baccf-operator-scripts\") pod \"watcher1555-account-delete-g8298\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") " pod="watcher-kuttl-default/watcher1555-account-delete-g8298"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.006620 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.083158 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c8ef00-c407-45a9-bc09-b975263baccf-operator-scripts\") pod \"watcher1555-account-delete-g8298\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") " pod="watcher-kuttl-default/watcher1555-account-delete-g8298"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.083863 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c8ef00-c407-45a9-bc09-b975263baccf-operator-scripts\") pod \"watcher1555-account-delete-g8298\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") " pod="watcher-kuttl-default/watcher1555-account-delete-g8298"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.085032 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq6kf\" (UniqueName: \"kubernetes.io/projected/47c8ef00-c407-45a9-bc09-b975263baccf-kube-api-access-xq6kf\") pod \"watcher1555-account-delete-g8298\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") " pod="watcher-kuttl-default/watcher1555-account-delete-g8298"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.104719 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq6kf\" (UniqueName: \"kubernetes.io/projected/47c8ef00-c407-45a9-bc09-b975263baccf-kube-api-access-xq6kf\") pod \"watcher1555-account-delete-g8298\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") " pod="watcher-kuttl-default/watcher1555-account-delete-g8298"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.149093 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher1555-account-delete-g8298"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.186020 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39b9a08a-84f3-4779-bc4c-1cf42869c99d-logs\") pod \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") "
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.186201 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-combined-ca-bundle\") pod \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") "
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.186299 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-custom-prometheus-ca\") pod \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") "
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.186802 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hvxn\" (UniqueName: \"kubernetes.io/projected/39b9a08a-84f3-4779-bc4c-1cf42869c99d-kube-api-access-9hvxn\") pod \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") "
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.186909 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-config-data\") pod \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") "
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.187093 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-cert-memcached-mtls\") pod \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") "
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.186614 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b9a08a-84f3-4779-bc4c-1cf42869c99d-logs" (OuterVolumeSpecName: "logs") pod "39b9a08a-84f3-4779-bc4c-1cf42869c99d" (UID: "39b9a08a-84f3-4779-bc4c-1cf42869c99d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.203264 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b9a08a-84f3-4779-bc4c-1cf42869c99d-kube-api-access-9hvxn" (OuterVolumeSpecName: "kube-api-access-9hvxn") pod "39b9a08a-84f3-4779-bc4c-1cf42869c99d" (UID: "39b9a08a-84f3-4779-bc4c-1cf42869c99d"). InnerVolumeSpecName "kube-api-access-9hvxn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.220405 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39b9a08a-84f3-4779-bc4c-1cf42869c99d" (UID: "39b9a08a-84f3-4779-bc4c-1cf42869c99d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.231406 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "39b9a08a-84f3-4779-bc4c-1cf42869c99d" (UID: "39b9a08a-84f3-4779-bc4c-1cf42869c99d"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.251500 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-config-data" (OuterVolumeSpecName: "config-data") pod "39b9a08a-84f3-4779-bc4c-1cf42869c99d" (UID: "39b9a08a-84f3-4779-bc4c-1cf42869c99d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.286388 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "39b9a08a-84f3-4779-bc4c-1cf42869c99d" (UID: "39b9a08a-84f3-4779-bc4c-1cf42869c99d"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.288574 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.288594 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.288604 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39b9a08a-84f3-4779-bc4c-1cf42869c99d-logs\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.288612 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.288620 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.288628 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hvxn\" (UniqueName: \"kubernetes.io/projected/39b9a08a-84f3-4779-bc4c-1cf42869c99d-kube-api-access-9hvxn\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.337700 4995 generic.go:334] "Generic (PLEG): container finished" podID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerID="92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b" exitCode=0
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.337744 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"39b9a08a-84f3-4779-bc4c-1cf42869c99d","Type":"ContainerDied","Data":"92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b"}
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.337770 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"39b9a08a-84f3-4779-bc4c-1cf42869c99d","Type":"ContainerDied","Data":"3a6cd3187438302097e41f8997601ad4ed78776668fbf86e5b5c905b7d06906d"}
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.337778 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.337787 4995 scope.go:117] "RemoveContainer" containerID="92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.390396 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.392007 4995 scope.go:117] "RemoveContainer" containerID="697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.399565 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.410573 4995 scope.go:117] "RemoveContainer" containerID="92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b"
Jan 26 23:34:28 crc kubenswrapper[4995]: E0126 23:34:28.413352 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b\": container with ID starting with 92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b not found: ID does not exist" containerID="92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.413493 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b"} err="failed to get container status \"92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b\": rpc error: code = NotFound desc = could not find container \"92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b\": container with ID starting with 92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b not found: ID does not exist"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.413585 4995 scope.go:117] "RemoveContainer" containerID="697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f"
Jan 26 23:34:28 crc kubenswrapper[4995]: E0126 23:34:28.413949 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f\": container with ID starting with 697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f not found: ID does not exist" containerID="697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.413985 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f"} err="failed to get container status \"697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f\": rpc error: code = NotFound desc = could not find container \"697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f\": container with ID starting with 697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f not found: ID does not exist"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.526158 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" path="/var/lib/kubelet/pods/39b9a08a-84f3-4779-bc4c-1cf42869c99d/volumes"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.526729 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b1297fe-4233-44a3-864c-2564bef1017f" path="/var/lib/kubelet/pods/9b1297fe-4233-44a3-864c-2564bef1017f/volumes"
Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.626613 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher1555-account-delete-g8298"]
Jan 26 23:34:28 crc kubenswrapper[4995]: W0126 23:34:28.628634 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47c8ef00_c407_45a9_bc09_b975263baccf.slice/crio-61ac876aec7923c87032ab4c1bd8c1224e114529b9ad4cb19fcda7693302b0dc WatchSource:0}: Error finding container 61ac876aec7923c87032ab4c1bd8c1224e114529b9ad4cb19fcda7693302b0dc: Status 404 returned error can't find the container with id 61ac876aec7923c87032ab4c1bd8c1224e114529b9ad4cb19fcda7693302b0dc
Jan 26 23:34:29 crc kubenswrapper[4995]: I0126 23:34:29.347074 4995 generic.go:334] "Generic (PLEG): container finished" podID="47c8ef00-c407-45a9-bc09-b975263baccf" containerID="25c8d3c2991d69a5a3326fb481b95cc7b754074c8cad3e82a6126d4dff723e1b" exitCode=0
Jan 26 23:34:29 crc kubenswrapper[4995]: I0126 23:34:29.347169 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher1555-account-delete-g8298" event={"ID":"47c8ef00-c407-45a9-bc09-b975263baccf","Type":"ContainerDied","Data":"25c8d3c2991d69a5a3326fb481b95cc7b754074c8cad3e82a6126d4dff723e1b"}
Jan 26 23:34:29 crc kubenswrapper[4995]: I0126 23:34:29.347484 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher1555-account-delete-g8298" event={"ID":"47c8ef00-c407-45a9-bc09-b975263baccf","Type":"ContainerStarted","Data":"61ac876aec7923c87032ab4c1bd8c1224e114529b9ad4cb19fcda7693302b0dc"}
Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.167461 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.167852 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="ceilometer-central-agent" containerID="cri-o://e39cd8d3b2d8dc5768ce6e0e2ae2c899a43d8ff5921753135b3150a977d5edda" gracePeriod=30
Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.167944 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="proxy-httpd" containerID="cri-o://3ba1f75aff84b3911ed9b4b0c5a01c12ee8d7a0011e88c10d24956806d412ce3" gracePeriod=30
Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.167950 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="sg-core" containerID="cri-o://5d8eb0fafb47003f7b3d91ad0b8cd1cfe249d5534930cfb4d031a36317e1a5a1" gracePeriod=30
Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.167969 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="ceilometer-notification-agent" containerID="cri-o://7d15cca2bc1baf6063b034c732082ceda61f3e8a8fa3faca8867cf61c611773e" gracePeriod=30
Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.234485 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.674514 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher1555-account-delete-g8298"
Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.825920 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq6kf\" (UniqueName: \"kubernetes.io/projected/47c8ef00-c407-45a9-bc09-b975263baccf-kube-api-access-xq6kf\") pod \"47c8ef00-c407-45a9-bc09-b975263baccf\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") "
Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.826162 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c8ef00-c407-45a9-bc09-b975263baccf-operator-scripts\") pod \"47c8ef00-c407-45a9-bc09-b975263baccf\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") "
Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.826755 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47c8ef00-c407-45a9-bc09-b975263baccf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47c8ef00-c407-45a9-bc09-b975263baccf" (UID: "47c8ef00-c407-45a9-bc09-b975263baccf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.832614 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c8ef00-c407-45a9-bc09-b975263baccf-kube-api-access-xq6kf" (OuterVolumeSpecName: "kube-api-access-xq6kf") pod "47c8ef00-c407-45a9-bc09-b975263baccf" (UID: "47c8ef00-c407-45a9-bc09-b975263baccf"). InnerVolumeSpecName "kube-api-access-xq6kf".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.927934 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c8ef00-c407-45a9-bc09-b975263baccf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.927969 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq6kf\" (UniqueName: \"kubernetes.io/projected/47c8ef00-c407-45a9-bc09-b975263baccf-kube-api-access-xq6kf\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.151542 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kjkxx"] Jan 26 23:34:31 crc kubenswrapper[4995]: E0126 23:34:31.152166 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c8ef00-c407-45a9-bc09-b975263baccf" containerName="mariadb-account-delete" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.152185 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c8ef00-c407-45a9-bc09-b975263baccf" containerName="mariadb-account-delete" Jan 26 23:34:31 crc kubenswrapper[4995]: E0126 23:34:31.152213 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-api" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.152222 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-api" Jan 26 23:34:31 crc kubenswrapper[4995]: E0126 23:34:31.152240 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-kuttl-api-log" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.152250 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-kuttl-api-log" Jan 26 
23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.152431 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-kuttl-api-log" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.152447 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c8ef00-c407-45a9-bc09-b975263baccf" containerName="mariadb-account-delete" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.152463 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-api" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.153844 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.180278 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kjkxx"] Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.233525 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ncv\" (UniqueName: \"kubernetes.io/projected/88605b61-373f-4ead-b09a-9aeda8950ab0-kube-api-access-d8ncv\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.233818 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-utilities\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.233973 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-catalog-content\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.335488 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-catalog-content\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.335583 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8ncv\" (UniqueName: \"kubernetes.io/projected/88605b61-373f-4ead-b09a-9aeda8950ab0-kube-api-access-d8ncv\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.335685 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-utilities\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.336192 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-catalog-content\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.336261 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-utilities\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.356151 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8ncv\" (UniqueName: \"kubernetes.io/projected/88605b61-373f-4ead-b09a-9aeda8950ab0-kube-api-access-d8ncv\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.371936 4995 generic.go:334] "Generic (PLEG): container finished" podID="7fd4c263-1050-4645-a224-8e1f758e4495" containerID="d19632ddd195db4ccb4d1fec947e424c4ea9433d0900fd8944957a701581ae55" exitCode=0 Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.372050 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"7fd4c263-1050-4645-a224-8e1f758e4495","Type":"ContainerDied","Data":"d19632ddd195db4ccb4d1fec947e424c4ea9433d0900fd8944957a701581ae55"} Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.375438 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher1555-account-delete-g8298" event={"ID":"47c8ef00-c407-45a9-bc09-b975263baccf","Type":"ContainerDied","Data":"61ac876aec7923c87032ab4c1bd8c1224e114529b9ad4cb19fcda7693302b0dc"} Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.375473 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ac876aec7923c87032ab4c1bd8c1224e114529b9ad4cb19fcda7693302b0dc" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.375524 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher1555-account-delete-g8298" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.380203 4995 generic.go:334] "Generic (PLEG): container finished" podID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerID="3ba1f75aff84b3911ed9b4b0c5a01c12ee8d7a0011e88c10d24956806d412ce3" exitCode=0 Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.380236 4995 generic.go:334] "Generic (PLEG): container finished" podID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerID="5d8eb0fafb47003f7b3d91ad0b8cd1cfe249d5534930cfb4d031a36317e1a5a1" exitCode=2 Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.380247 4995 generic.go:334] "Generic (PLEG): container finished" podID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerID="e39cd8d3b2d8dc5768ce6e0e2ae2c899a43d8ff5921753135b3150a977d5edda" exitCode=0 Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.380267 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerDied","Data":"3ba1f75aff84b3911ed9b4b0c5a01c12ee8d7a0011e88c10d24956806d412ce3"} Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.380293 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerDied","Data":"5d8eb0fafb47003f7b3d91ad0b8cd1cfe249d5534930cfb4d031a36317e1a5a1"} Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.380306 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerDied","Data":"e39cd8d3b2d8dc5768ce6e0e2ae2c899a43d8ff5921753135b3150a977d5edda"} Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.497043 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.587243 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.747562 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-cert-memcached-mtls\") pod \"7fd4c263-1050-4645-a224-8e1f758e4495\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.747855 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-config-data\") pod \"7fd4c263-1050-4645-a224-8e1f758e4495\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.747896 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd4c263-1050-4645-a224-8e1f758e4495-logs\") pod \"7fd4c263-1050-4645-a224-8e1f758e4495\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.747955 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-combined-ca-bundle\") pod \"7fd4c263-1050-4645-a224-8e1f758e4495\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.748000 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn67w\" (UniqueName: \"kubernetes.io/projected/7fd4c263-1050-4645-a224-8e1f758e4495-kube-api-access-tn67w\") pod \"7fd4c263-1050-4645-a224-8e1f758e4495\" (UID: 
\"7fd4c263-1050-4645-a224-8e1f758e4495\") " Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.748442 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd4c263-1050-4645-a224-8e1f758e4495-logs" (OuterVolumeSpecName: "logs") pod "7fd4c263-1050-4645-a224-8e1f758e4495" (UID: "7fd4c263-1050-4645-a224-8e1f758e4495"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.759959 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd4c263-1050-4645-a224-8e1f758e4495-kube-api-access-tn67w" (OuterVolumeSpecName: "kube-api-access-tn67w") pod "7fd4c263-1050-4645-a224-8e1f758e4495" (UID: "7fd4c263-1050-4645-a224-8e1f758e4495"). InnerVolumeSpecName "kube-api-access-tn67w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.780083 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fd4c263-1050-4645-a224-8e1f758e4495" (UID: "7fd4c263-1050-4645-a224-8e1f758e4495"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.810179 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-config-data" (OuterVolumeSpecName: "config-data") pod "7fd4c263-1050-4645-a224-8e1f758e4495" (UID: "7fd4c263-1050-4645-a224-8e1f758e4495"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.831230 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "7fd4c263-1050-4645-a224-8e1f758e4495" (UID: "7fd4c263-1050-4645-a224-8e1f758e4495"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.849388 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn67w\" (UniqueName: \"kubernetes.io/projected/7fd4c263-1050-4645-a224-8e1f758e4495-kube-api-access-tn67w\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.849427 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.849437 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.849446 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd4c263-1050-4645-a224-8e1f758e4495-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.849456 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.022288 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-kjkxx"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.388399 4995 generic.go:334] "Generic (PLEG): container finished" podID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerID="b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e" exitCode=0 Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.388450 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjkxx" event={"ID":"88605b61-373f-4ead-b09a-9aeda8950ab0","Type":"ContainerDied","Data":"b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e"} Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.388699 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjkxx" event={"ID":"88605b61-373f-4ead-b09a-9aeda8950ab0","Type":"ContainerStarted","Data":"adc7d9abe152f60e5a599fd53bf316012dacec6f8f8be6b6961feea31585f3d6"} Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.389799 4995 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.390407 4995 generic.go:334] "Generic (PLEG): container finished" podID="fd4eea70-3af8-412b-8a7f-8abda2350f7a" containerID="7ba57781504f7092ac75ef403a28945ae13079b33c156708e6f728cfe78e77e8" exitCode=0 Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.390479 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"fd4eea70-3af8-412b-8a7f-8abda2350f7a","Type":"ContainerDied","Data":"7ba57781504f7092ac75ef403a28945ae13079b33c156708e6f728cfe78e77e8"} Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.392557 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"7fd4c263-1050-4645-a224-8e1f758e4495","Type":"ContainerDied","Data":"dae048464ca14139239006ce1ddddc5b74e74d486974462fe0bfd5796420c08e"} Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.392582 4995 scope.go:117] "RemoveContainer" containerID="d19632ddd195db4ccb4d1fec947e424c4ea9433d0900fd8944957a701581ae55" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.392946 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.431237 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.439660 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.498985 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.547069 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd4c263-1050-4645-a224-8e1f758e4495" path="/var/lib/kubelet/pods/7fd4c263-1050-4645-a224-8e1f758e4495/volumes" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.661708 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-cert-memcached-mtls\") pod \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.661791 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmr4l\" (UniqueName: \"kubernetes.io/projected/fd4eea70-3af8-412b-8a7f-8abda2350f7a-kube-api-access-jmr4l\") pod 
\"fd4eea70-3af8-412b-8a7f-8abda2350f7a\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.661814 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-combined-ca-bundle\") pod \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.661859 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4eea70-3af8-412b-8a7f-8abda2350f7a-logs\") pod \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.661884 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-custom-prometheus-ca\") pod \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.661925 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-config-data\") pod \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.663473 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd4eea70-3af8-412b-8a7f-8abda2350f7a-logs" (OuterVolumeSpecName: "logs") pod "fd4eea70-3af8-412b-8a7f-8abda2350f7a" (UID: "fd4eea70-3af8-412b-8a7f-8abda2350f7a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.667084 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd4eea70-3af8-412b-8a7f-8abda2350f7a-kube-api-access-jmr4l" (OuterVolumeSpecName: "kube-api-access-jmr4l") pod "fd4eea70-3af8-412b-8a7f-8abda2350f7a" (UID: "fd4eea70-3af8-412b-8a7f-8abda2350f7a"). InnerVolumeSpecName "kube-api-access-jmr4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.684859 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd4eea70-3af8-412b-8a7f-8abda2350f7a" (UID: "fd4eea70-3af8-412b-8a7f-8abda2350f7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.696172 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "fd4eea70-3af8-412b-8a7f-8abda2350f7a" (UID: "fd4eea70-3af8-412b-8a7f-8abda2350f7a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.739036 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-config-data" (OuterVolumeSpecName: "config-data") pod "fd4eea70-3af8-412b-8a7f-8abda2350f7a" (UID: "fd4eea70-3af8-412b-8a7f-8abda2350f7a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.745935 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "fd4eea70-3af8-412b-8a7f-8abda2350f7a" (UID: "fd4eea70-3af8-412b-8a7f-8abda2350f7a"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.763526 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.764419 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmr4l\" (UniqueName: \"kubernetes.io/projected/fd4eea70-3af8-412b-8a7f-8abda2350f7a-kube-api-access-jmr4l\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.764760 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.764846 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4eea70-3af8-412b-8a7f-8abda2350f7a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.764929 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.765001 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.837313 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-c64b2"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.843190 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-c64b2"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.853150 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-1555-account-create-update-j8dp6"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.859706 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher1555-account-delete-g8298"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.866883 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher1555-account-delete-g8298"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.871641 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-1555-account-create-update-j8dp6"] Jan 26 23:34:33 crc kubenswrapper[4995]: I0126 23:34:33.402564 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjkxx" event={"ID":"88605b61-373f-4ead-b09a-9aeda8950ab0","Type":"ContainerStarted","Data":"69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389"} Jan 26 23:34:33 crc kubenswrapper[4995]: I0126 23:34:33.404546 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"fd4eea70-3af8-412b-8a7f-8abda2350f7a","Type":"ContainerDied","Data":"cbea931bd0838e2c97cfcebadf6458ddc41bc04afdf7d81934ae7c0566e45a9b"} Jan 26 23:34:33 crc kubenswrapper[4995]: I0126 23:34:33.404603 4995 scope.go:117] "RemoveContainer" 
containerID="7ba57781504f7092ac75ef403a28945ae13079b33c156708e6f728cfe78e77e8" Jan 26 23:34:33 crc kubenswrapper[4995]: I0126 23:34:33.404756 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:33 crc kubenswrapper[4995]: I0126 23:34:33.461157 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:34:33 crc kubenswrapper[4995]: I0126 23:34:33.467558 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.051406 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-tddnh"] Jan 26 23:34:34 crc kubenswrapper[4995]: E0126 23:34:34.051710 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4eea70-3af8-412b-8a7f-8abda2350f7a" containerName="watcher-decision-engine" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.051721 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4eea70-3af8-412b-8a7f-8abda2350f7a" containerName="watcher-decision-engine" Jan 26 23:34:34 crc kubenswrapper[4995]: E0126 23:34:34.051740 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd4c263-1050-4645-a224-8e1f758e4495" containerName="watcher-applier" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.051746 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd4c263-1050-4645-a224-8e1f758e4495" containerName="watcher-applier" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.051889 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd4c263-1050-4645-a224-8e1f758e4495" containerName="watcher-applier" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.051910 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4eea70-3af8-412b-8a7f-8abda2350f7a" 
containerName="watcher-decision-engine" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.052434 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.063982 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-tddnh"] Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.092397 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-0966-account-create-update-wjl7d"] Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.093447 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.099156 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.111876 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-0966-account-create-update-wjl7d"] Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.192957 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32a18765-f113-401c-850b-e585b2f3bd59-operator-scripts\") pod \"watcher-0966-account-create-update-wjl7d\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") " pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.193034 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj2q4\" (UniqueName: \"kubernetes.io/projected/32a18765-f113-401c-850b-e585b2f3bd59-kube-api-access-vj2q4\") pod \"watcher-0966-account-create-update-wjl7d\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") " 
pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.193059 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d487adb0-ddf0-4932-9fad-09dfb2de1d00-operator-scripts\") pod \"watcher-db-create-tddnh\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") " pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.193117 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqltx\" (UniqueName: \"kubernetes.io/projected/d487adb0-ddf0-4932-9fad-09dfb2de1d00-kube-api-access-qqltx\") pod \"watcher-db-create-tddnh\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") " pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.295042 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32a18765-f113-401c-850b-e585b2f3bd59-operator-scripts\") pod \"watcher-0966-account-create-update-wjl7d\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") " pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.295129 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj2q4\" (UniqueName: \"kubernetes.io/projected/32a18765-f113-401c-850b-e585b2f3bd59-kube-api-access-vj2q4\") pod \"watcher-0966-account-create-update-wjl7d\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") " pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.295152 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d487adb0-ddf0-4932-9fad-09dfb2de1d00-operator-scripts\") pod \"watcher-db-create-tddnh\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") " pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.295180 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqltx\" (UniqueName: \"kubernetes.io/projected/d487adb0-ddf0-4932-9fad-09dfb2de1d00-kube-api-access-qqltx\") pod \"watcher-db-create-tddnh\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") " pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.295981 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32a18765-f113-401c-850b-e585b2f3bd59-operator-scripts\") pod \"watcher-0966-account-create-update-wjl7d\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") " pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.296070 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d487adb0-ddf0-4932-9fad-09dfb2de1d00-operator-scripts\") pod \"watcher-db-create-tddnh\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") " pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.314815 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj2q4\" (UniqueName: \"kubernetes.io/projected/32a18765-f113-401c-850b-e585b2f3bd59-kube-api-access-vj2q4\") pod \"watcher-0966-account-create-update-wjl7d\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") " pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.328537 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qqltx\" (UniqueName: \"kubernetes.io/projected/d487adb0-ddf0-4932-9fad-09dfb2de1d00-kube-api-access-qqltx\") pod \"watcher-db-create-tddnh\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") " pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.406209 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.426120 4995 generic.go:334] "Generic (PLEG): container finished" podID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerID="69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389" exitCode=0 Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.426219 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjkxx" event={"ID":"88605b61-373f-4ead-b09a-9aeda8950ab0","Type":"ContainerDied","Data":"69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389"} Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.432814 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.532171 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c8ef00-c407-45a9-bc09-b975263baccf" path="/var/lib/kubelet/pods/47c8ef00-c407-45a9-bc09-b975263baccf/volumes" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.533856 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="599bdb97-9d21-44b9-9a59-84320b1c4a6e" path="/var/lib/kubelet/pods/599bdb97-9d21-44b9-9a59-84320b1c4a6e/volumes" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.536723 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2f73d1-0380-4fcf-9fde-35f821426fed" path="/var/lib/kubelet/pods/ca2f73d1-0380-4fcf-9fde-35f821426fed/volumes" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.537472 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd4eea70-3af8-412b-8a7f-8abda2350f7a" path="/var/lib/kubelet/pods/fd4eea70-3af8-412b-8a7f-8abda2350f7a/volumes" Jan 26 23:34:34 crc kubenswrapper[4995]: W0126 23:34:34.916672 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd487adb0_ddf0_4932_9fad_09dfb2de1d00.slice/crio-aebc051a6c323dd72858e998286e44cb05dda694c003abf80e1c823776a8f8e5 WatchSource:0}: Error finding container aebc051a6c323dd72858e998286e44cb05dda694c003abf80e1c823776a8f8e5: Status 404 returned error can't find the container with id aebc051a6c323dd72858e998286e44cb05dda694c003abf80e1c823776a8f8e5 Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.918922 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-tddnh"] Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.056750 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-0966-account-create-update-wjl7d"] Jan 
26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.446923 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" event={"ID":"32a18765-f113-401c-850b-e585b2f3bd59","Type":"ContainerStarted","Data":"05941be74554d8c96582833cc04e5255893bcfe29812230a633a9595ed2b3e52"} Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.447214 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" event={"ID":"32a18765-f113-401c-850b-e585b2f3bd59","Type":"ContainerStarted","Data":"57e1113b141e5cdc90ca2c6a555835e026eae246413475b254aa598c9e83e8c8"} Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.456203 4995 generic.go:334] "Generic (PLEG): container finished" podID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerID="7d15cca2bc1baf6063b034c732082ceda61f3e8a8fa3faca8867cf61c611773e" exitCode=0 Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.456257 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerDied","Data":"7d15cca2bc1baf6063b034c732082ceda61f3e8a8fa3faca8867cf61c611773e"} Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.458002 4995 generic.go:334] "Generic (PLEG): container finished" podID="d487adb0-ddf0-4932-9fad-09dfb2de1d00" containerID="cd3358a0ea8ceaa10989cd97ffca9dfefbbb82795be31ea1a44850cfa67b5055" exitCode=0 Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.458042 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-tddnh" event={"ID":"d487adb0-ddf0-4932-9fad-09dfb2de1d00","Type":"ContainerDied","Data":"cd3358a0ea8ceaa10989cd97ffca9dfefbbb82795be31ea1a44850cfa67b5055"} Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.458059 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-tddnh" 
event={"ID":"d487adb0-ddf0-4932-9fad-09dfb2de1d00","Type":"ContainerStarted","Data":"aebc051a6c323dd72858e998286e44cb05dda694c003abf80e1c823776a8f8e5"} Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.460417 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjkxx" event={"ID":"88605b61-373f-4ead-b09a-9aeda8950ab0","Type":"ContainerStarted","Data":"f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e"} Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.466765 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" podStartSLOduration=1.466749766 podStartE2EDuration="1.466749766s" podCreationTimestamp="2026-01-26 23:34:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:34:35.462789367 +0000 UTC m=+1579.627496832" watchObservedRunningTime="2026-01-26 23:34:35.466749766 +0000 UTC m=+1579.631457231" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.490510 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kjkxx" podStartSLOduration=2.004854938 podStartE2EDuration="4.490492592s" podCreationTimestamp="2026-01-26 23:34:31 +0000 UTC" firstStartedPulling="2026-01-26 23:34:32.389589427 +0000 UTC m=+1576.554296892" lastFinishedPulling="2026-01-26 23:34:34.875227081 +0000 UTC m=+1579.039934546" observedRunningTime="2026-01-26 23:34:35.486551903 +0000 UTC m=+1579.651259378" watchObservedRunningTime="2026-01-26 23:34:35.490492592 +0000 UTC m=+1579.655200067" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.729456 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839384 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-log-httpd\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839442 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-ceilometer-tls-certs\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839496 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5j7w\" (UniqueName: \"kubernetes.io/projected/a3386474-d50c-4dcf-b6b5-9aae87610ee5-kube-api-access-d5j7w\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839578 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-sg-core-conf-yaml\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839600 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-scripts\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839680 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-run-httpd\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839726 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-combined-ca-bundle\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839760 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-config-data\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839818 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839999 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.840200 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.840228 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.845416 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3386474-d50c-4dcf-b6b5-9aae87610ee5-kube-api-access-d5j7w" (OuterVolumeSpecName: "kube-api-access-d5j7w") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "kube-api-access-d5j7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.846630 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-scripts" (OuterVolumeSpecName: "scripts") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.879470 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.917662 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.934397 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-config-data" (OuterVolumeSpecName: "config-data") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.935633 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.944187 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.944220 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.944233 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.944247 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5j7w\" (UniqueName: \"kubernetes.io/projected/a3386474-d50c-4dcf-b6b5-9aae87610ee5-kube-api-access-d5j7w\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.944261 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.944272 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.500058 4995 generic.go:334] "Generic (PLEG): container finished" podID="32a18765-f113-401c-850b-e585b2f3bd59" containerID="05941be74554d8c96582833cc04e5255893bcfe29812230a633a9595ed2b3e52" exitCode=0 Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.500138 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" event={"ID":"32a18765-f113-401c-850b-e585b2f3bd59","Type":"ContainerDied","Data":"05941be74554d8c96582833cc04e5255893bcfe29812230a633a9595ed2b3e52"} Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.503054 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerDied","Data":"464dc0604caf674cfc5cd0b86de7eea3f3ee7745d7ecd919bcd18bc051110f62"} Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.503076 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.503128 4995 scope.go:117] "RemoveContainer" containerID="3ba1f75aff84b3911ed9b4b0c5a01c12ee8d7a0011e88c10d24956806d412ce3" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.540640 4995 scope.go:117] "RemoveContainer" containerID="5d8eb0fafb47003f7b3d91ad0b8cd1cfe249d5534930cfb4d031a36317e1a5a1" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.566787 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.580362 4995 scope.go:117] "RemoveContainer" containerID="7d15cca2bc1baf6063b034c732082ceda61f3e8a8fa3faca8867cf61c611773e" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.580523 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.588982 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:36 crc kubenswrapper[4995]: E0126 23:34:36.589383 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="proxy-httpd" Jan 26 23:34:36 crc 
kubenswrapper[4995]: I0126 23:34:36.589403 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="proxy-httpd" Jan 26 23:34:36 crc kubenswrapper[4995]: E0126 23:34:36.589436 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="ceilometer-central-agent" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.589447 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="ceilometer-central-agent" Jan 26 23:34:36 crc kubenswrapper[4995]: E0126 23:34:36.589467 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="sg-core" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.589475 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="sg-core" Jan 26 23:34:36 crc kubenswrapper[4995]: E0126 23:34:36.589488 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="ceilometer-notification-agent" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.589496 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="ceilometer-notification-agent" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.589703 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="ceilometer-central-agent" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.589722 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="sg-core" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.589736 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" 
containerName="ceilometer-notification-agent" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.589749 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="proxy-httpd" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.591486 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.594029 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.594606 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.594981 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.595408 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.617672 4995 scope.go:117] "RemoveContainer" containerID="e39cd8d3b2d8dc5768ce6e0e2ae2c899a43d8ff5921753135b3150a977d5edda" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.758590 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.758865 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.758892 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-log-httpd\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.758949 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-run-httpd\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.758969 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-scripts\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.759117 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bgkz\" (UniqueName: \"kubernetes.io/projected/0e1b3fa8-47bf-4484-98a7-b131e9bed123-kube-api-access-4bgkz\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.759155 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " 
pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.759286 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-config-data\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860674 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bgkz\" (UniqueName: \"kubernetes.io/projected/0e1b3fa8-47bf-4484-98a7-b131e9bed123-kube-api-access-4bgkz\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860731 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860813 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-config-data\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860852 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860877 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860899 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-log-httpd\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860930 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-run-httpd\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860951 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-scripts\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.862794 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-run-httpd\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.864004 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-log-httpd\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.864871 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.866570 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-config-data\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.872724 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.873492 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-scripts\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.874464 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.881483 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bgkz\" (UniqueName: \"kubernetes.io/projected/0e1b3fa8-47bf-4484-98a7-b131e9bed123-kube-api-access-4bgkz\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.910850 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.925755 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-tddnh"
Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.062709 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqltx\" (UniqueName: \"kubernetes.io/projected/d487adb0-ddf0-4932-9fad-09dfb2de1d00-kube-api-access-qqltx\") pod \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") "
Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.062884 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d487adb0-ddf0-4932-9fad-09dfb2de1d00-operator-scripts\") pod \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") "
Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.063698 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d487adb0-ddf0-4932-9fad-09dfb2de1d00-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d487adb0-ddf0-4932-9fad-09dfb2de1d00" (UID: "d487adb0-ddf0-4932-9fad-09dfb2de1d00"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.071489 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d487adb0-ddf0-4932-9fad-09dfb2de1d00-kube-api-access-qqltx" (OuterVolumeSpecName: "kube-api-access-qqltx") pod "d487adb0-ddf0-4932-9fad-09dfb2de1d00" (UID: "d487adb0-ddf0-4932-9fad-09dfb2de1d00"). InnerVolumeSpecName "kube-api-access-qqltx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.164434 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d487adb0-ddf0-4932-9fad-09dfb2de1d00-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.164471 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqltx\" (UniqueName: \"kubernetes.io/projected/d487adb0-ddf0-4932-9fad-09dfb2de1d00-kube-api-access-qqltx\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.390831 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.517869 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-tddnh" event={"ID":"d487adb0-ddf0-4932-9fad-09dfb2de1d00","Type":"ContainerDied","Data":"aebc051a6c323dd72858e998286e44cb05dda694c003abf80e1c823776a8f8e5"}
Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.517938 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aebc051a6c323dd72858e998286e44cb05dda694c003abf80e1c823776a8f8e5"
Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.518061 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-tddnh"
Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.526690 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerStarted","Data":"a00645c5e1dd09271e74863e0e5c91226b9b85c9d1bb4a0367151708e8674b54"}
Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.970316 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d"
Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.078122 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32a18765-f113-401c-850b-e585b2f3bd59-operator-scripts\") pod \"32a18765-f113-401c-850b-e585b2f3bd59\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") "
Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.078237 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj2q4\" (UniqueName: \"kubernetes.io/projected/32a18765-f113-401c-850b-e585b2f3bd59-kube-api-access-vj2q4\") pod \"32a18765-f113-401c-850b-e585b2f3bd59\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") "
Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.082899 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a18765-f113-401c-850b-e585b2f3bd59-kube-api-access-vj2q4" (OuterVolumeSpecName: "kube-api-access-vj2q4") pod "32a18765-f113-401c-850b-e585b2f3bd59" (UID: "32a18765-f113-401c-850b-e585b2f3bd59"). InnerVolumeSpecName "kube-api-access-vj2q4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.083230 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a18765-f113-401c-850b-e585b2f3bd59-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32a18765-f113-401c-850b-e585b2f3bd59" (UID: "32a18765-f113-401c-850b-e585b2f3bd59"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.180231 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32a18765-f113-401c-850b-e585b2f3bd59-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.180277 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj2q4\" (UniqueName: \"kubernetes.io/projected/32a18765-f113-401c-850b-e585b2f3bd59-kube-api-access-vj2q4\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.527831 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" path="/var/lib/kubelet/pods/a3386474-d50c-4dcf-b6b5-9aae87610ee5/volumes"
Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.538223 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerStarted","Data":"6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f"}
Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.540640 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" event={"ID":"32a18765-f113-401c-850b-e585b2f3bd59","Type":"ContainerDied","Data":"57e1113b141e5cdc90ca2c6a555835e026eae246413475b254aa598c9e83e8c8"}
Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.540677 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57e1113b141e5cdc90ca2c6a555835e026eae246413475b254aa598c9e83e8c8"
Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.540739 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.298313 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"]
Jan 26 23:34:39 crc kubenswrapper[4995]: E0126 23:34:39.298932 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d487adb0-ddf0-4932-9fad-09dfb2de1d00" containerName="mariadb-database-create"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.298950 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="d487adb0-ddf0-4932-9fad-09dfb2de1d00" containerName="mariadb-database-create"
Jan 26 23:34:39 crc kubenswrapper[4995]: E0126 23:34:39.298969 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a18765-f113-401c-850b-e585b2f3bd59" containerName="mariadb-account-create-update"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.298979 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a18765-f113-401c-850b-e585b2f3bd59" containerName="mariadb-account-create-update"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.299134 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a18765-f113-401c-850b-e585b2f3bd59" containerName="mariadb-account-create-update"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.299159 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="d487adb0-ddf0-4932-9fad-09dfb2de1d00" containerName="mariadb-database-create"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.299655 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.302425 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-cc6mk"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.303232 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.311062 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"]
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.397618 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg4wt\" (UniqueName: \"kubernetes.io/projected/3b5f9d2a-2291-4153-8d71-602f827fd381-kube-api-access-wg4wt\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.397721 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.397784 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-config-data\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.397942 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-db-sync-config-data\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.499515 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-db-sync-config-data\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.499643 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg4wt\" (UniqueName: \"kubernetes.io/projected/3b5f9d2a-2291-4153-8d71-602f827fd381-kube-api-access-wg4wt\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.499679 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.499704 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-config-data\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.503474 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-db-sync-config-data\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.504273 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-config-data\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.504718 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.516020 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg4wt\" (UniqueName: \"kubernetes.io/projected/3b5f9d2a-2291-4153-8d71-602f827fd381-kube-api-access-wg4wt\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.548887 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerStarted","Data":"c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2"}
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.550607 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerStarted","Data":"2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502"}
Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.613739 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:40 crc kubenswrapper[4995]: I0126 23:34:40.069412 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"]
Jan 26 23:34:40 crc kubenswrapper[4995]: I0126 23:34:40.557307 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" event={"ID":"3b5f9d2a-2291-4153-8d71-602f827fd381","Type":"ContainerStarted","Data":"bf5164b7995961e784d793950a89a89942f6f93bc6fda24c41d104c6d00ebc5b"}
Jan 26 23:34:40 crc kubenswrapper[4995]: I0126 23:34:40.557613 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" event={"ID":"3b5f9d2a-2291-4153-8d71-602f827fd381","Type":"ContainerStarted","Data":"9f41623d1d6a33bc57b262e2ed5931e173521fb5ada02efe68c0474e3e48c050"}
Jan 26 23:34:40 crc kubenswrapper[4995]: I0126 23:34:40.572346 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" podStartSLOduration=1.572331337 podStartE2EDuration="1.572331337s" podCreationTimestamp="2026-01-26 23:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:34:40.56928359 +0000 UTC m=+1584.733991065" watchObservedRunningTime="2026-01-26 23:34:40.572331337 +0000 UTC m=+1584.737038802"
Jan 26 23:34:41 crc kubenswrapper[4995]: I0126 23:34:41.497764 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kjkxx"
Jan 26 23:34:41 crc kubenswrapper[4995]: I0126 23:34:41.498063 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kjkxx"
Jan 26 23:34:41 crc kubenswrapper[4995]: I0126 23:34:41.544375 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kjkxx"
Jan 26 23:34:41 crc kubenswrapper[4995]: I0126 23:34:41.570310 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerStarted","Data":"fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808"}
Jan 26 23:34:41 crc kubenswrapper[4995]: I0126 23:34:41.597624 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.621973284 podStartE2EDuration="5.597607645s" podCreationTimestamp="2026-01-26 23:34:36 +0000 UTC" firstStartedPulling="2026-01-26 23:34:37.395926874 +0000 UTC m=+1581.560634359" lastFinishedPulling="2026-01-26 23:34:40.371561245 +0000 UTC m=+1584.536268720" observedRunningTime="2026-01-26 23:34:41.593873581 +0000 UTC m=+1585.758581096" watchObservedRunningTime="2026-01-26 23:34:41.597607645 +0000 UTC m=+1585.762315100"
Jan 26 23:34:41 crc kubenswrapper[4995]: I0126 23:34:41.640151 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kjkxx"
Jan 26 23:34:42 crc kubenswrapper[4995]: I0126 23:34:42.579061 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:34:43 crc kubenswrapper[4995]: I0126 23:34:43.589476 4995 generic.go:334] "Generic (PLEG): container finished" podID="3b5f9d2a-2291-4153-8d71-602f827fd381" containerID="bf5164b7995961e784d793950a89a89942f6f93bc6fda24c41d104c6d00ebc5b" exitCode=0
Jan 26 23:34:43 crc kubenswrapper[4995]: I0126 23:34:43.589555 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" event={"ID":"3b5f9d2a-2291-4153-8d71-602f827fd381","Type":"ContainerDied","Data":"bf5164b7995961e784d793950a89a89942f6f93bc6fda24c41d104c6d00ebc5b"}
Jan 26 23:34:44 crc kubenswrapper[4995]: I0126 23:34:44.999000 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.083788 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-db-sync-config-data\") pod \"3b5f9d2a-2291-4153-8d71-602f827fd381\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") "
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.083862 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg4wt\" (UniqueName: \"kubernetes.io/projected/3b5f9d2a-2291-4153-8d71-602f827fd381-kube-api-access-wg4wt\") pod \"3b5f9d2a-2291-4153-8d71-602f827fd381\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") "
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.083917 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-config-data\") pod \"3b5f9d2a-2291-4153-8d71-602f827fd381\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") "
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.083959 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-combined-ca-bundle\") pod \"3b5f9d2a-2291-4153-8d71-602f827fd381\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") "
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.100316 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b5f9d2a-2291-4153-8d71-602f827fd381-kube-api-access-wg4wt" (OuterVolumeSpecName: "kube-api-access-wg4wt") pod "3b5f9d2a-2291-4153-8d71-602f827fd381" (UID: "3b5f9d2a-2291-4153-8d71-602f827fd381"). InnerVolumeSpecName "kube-api-access-wg4wt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.108210 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3b5f9d2a-2291-4153-8d71-602f827fd381" (UID: "3b5f9d2a-2291-4153-8d71-602f827fd381"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.133270 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b5f9d2a-2291-4153-8d71-602f827fd381" (UID: "3b5f9d2a-2291-4153-8d71-602f827fd381"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.140326 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-config-data" (OuterVolumeSpecName: "config-data") pod "3b5f9d2a-2291-4153-8d71-602f827fd381" (UID: "3b5f9d2a-2291-4153-8d71-602f827fd381"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.172650 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kjkxx"]
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.172876 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kjkxx" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerName="registry-server" containerID="cri-o://f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e" gracePeriod=2
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.186982 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.187021 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.187035 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg4wt\" (UniqueName: \"kubernetes.io/projected/3b5f9d2a-2291-4153-8d71-602f827fd381-kube-api-access-wg4wt\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.187047 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.516226 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjkxx"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.594002 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-utilities\") pod \"88605b61-373f-4ead-b09a-9aeda8950ab0\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") "
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.594051 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-catalog-content\") pod \"88605b61-373f-4ead-b09a-9aeda8950ab0\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") "
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.594176 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8ncv\" (UniqueName: \"kubernetes.io/projected/88605b61-373f-4ead-b09a-9aeda8950ab0-kube-api-access-d8ncv\") pod \"88605b61-373f-4ead-b09a-9aeda8950ab0\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") "
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.595829 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-utilities" (OuterVolumeSpecName: "utilities") pod "88605b61-373f-4ead-b09a-9aeda8950ab0" (UID: "88605b61-373f-4ead-b09a-9aeda8950ab0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.598857 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88605b61-373f-4ead-b09a-9aeda8950ab0-kube-api-access-d8ncv" (OuterVolumeSpecName: "kube-api-access-d8ncv") pod "88605b61-373f-4ead-b09a-9aeda8950ab0" (UID: "88605b61-373f-4ead-b09a-9aeda8950ab0"). InnerVolumeSpecName "kube-api-access-d8ncv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.611822 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" event={"ID":"3b5f9d2a-2291-4153-8d71-602f827fd381","Type":"ContainerDied","Data":"9f41623d1d6a33bc57b262e2ed5931e173521fb5ada02efe68c0474e3e48c050"}
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.611856 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f41623d1d6a33bc57b262e2ed5931e173521fb5ada02efe68c0474e3e48c050"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.611918 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.620302 4995 generic.go:334] "Generic (PLEG): container finished" podID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerID="f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e" exitCode=0
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.620339 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjkxx" event={"ID":"88605b61-373f-4ead-b09a-9aeda8950ab0","Type":"ContainerDied","Data":"f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e"}
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.620365 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjkxx" event={"ID":"88605b61-373f-4ead-b09a-9aeda8950ab0","Type":"ContainerDied","Data":"adc7d9abe152f60e5a599fd53bf316012dacec6f8f8be6b6961feea31585f3d6"}
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.620383 4995 scope.go:117] "RemoveContainer" containerID="f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.620513 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjkxx"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.662072 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88605b61-373f-4ead-b09a-9aeda8950ab0" (UID: "88605b61-373f-4ead-b09a-9aeda8950ab0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.690939 4995 scope.go:117] "RemoveContainer" containerID="69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.696084 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.696157 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.696186 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8ncv\" (UniqueName: \"kubernetes.io/projected/88605b61-373f-4ead-b09a-9aeda8950ab0-kube-api-access-d8ncv\") on node \"crc\" DevicePath \"\""
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.714107 4995 scope.go:117] "RemoveContainer" containerID="b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.729724 4995 scope.go:117] "RemoveContainer" containerID="f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e"
Jan 26 23:34:45 crc kubenswrapper[4995]: E0126 23:34:45.730428 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e\": container with ID starting with f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e not found: ID does not exist" containerID="f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.730464 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e"} err="failed to get container status \"f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e\": rpc error: code = NotFound desc = could not find container \"f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e\": container with ID starting with f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e not found: ID does not exist"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.730488 4995 scope.go:117] "RemoveContainer" containerID="69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389"
Jan 26 23:34:45 crc kubenswrapper[4995]: E0126 23:34:45.730840 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389\": container with ID starting with 69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389 not found: ID does not exist" containerID="69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.730898 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389"} err="failed to get container status \"69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389\": rpc error: code = NotFound desc = could not find container \"69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389\": container with ID starting with 69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389 not found: ID does not exist"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.730937 4995 scope.go:117] "RemoveContainer" containerID="b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e"
Jan 26 23:34:45 crc kubenswrapper[4995]: E0126 23:34:45.731310 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e\": container with ID starting with b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e not found: ID does not exist" containerID="b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.731344 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e"} err="failed to get container status \"b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e\": rpc error: code = NotFound desc = could not find container \"b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e\": container with ID starting with b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e not found: ID does not exist"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.883168 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 26 23:34:45 crc kubenswrapper[4995]: E0126 23:34:45.883472 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerName="extract-utilities"
Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.883488 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0"
containerName="extract-utilities" Jan 26 23:34:45 crc kubenswrapper[4995]: E0126 23:34:45.883512 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerName="registry-server" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.883520 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerName="registry-server" Jan 26 23:34:45 crc kubenswrapper[4995]: E0126 23:34:45.883532 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerName="extract-content" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.883539 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerName="extract-content" Jan 26 23:34:45 crc kubenswrapper[4995]: E0126 23:34:45.883556 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b5f9d2a-2291-4153-8d71-602f827fd381" containerName="watcher-kuttl-db-sync" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.883561 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b5f9d2a-2291-4153-8d71-602f827fd381" containerName="watcher-kuttl-db-sync" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.883714 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b5f9d2a-2291-4153-8d71-602f827fd381" containerName="watcher-kuttl-db-sync" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.883734 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerName="registry-server" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.884357 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.888627 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-cc6mk" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.889146 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.899338 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.951108 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.952359 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.957209 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.972792 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.981156 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.985528 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.988606 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.001641 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht7vp\" (UniqueName: \"kubernetes.io/projected/ab805559-bee4-4905-95db-b9fd0da719ed-kube-api-access-ht7vp\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.001706 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.001723 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.001760 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab805559-bee4-4905-95db-b9fd0da719ed-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.001809 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.018785 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.027020 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kjkxx"] Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.035259 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kjkxx"] Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.102860 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.103137 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.103262 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bc50a4-5dd7-42df-9279-4d07dd760275-logs\") pod \"watcher-kuttl-api-0\" 
(UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.103331 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xgz2\" (UniqueName: \"kubernetes.io/projected/cacd898a-7524-4989-95ce-0b7a05e318ba-kube-api-access-6xgz2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.103437 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab805559-bee4-4905-95db-b9fd0da719ed-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.103836 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ljtp\" (UniqueName: \"kubernetes.io/projected/f2bc50a4-5dd7-42df-9279-4d07dd760275-kube-api-access-2ljtp\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.103935 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.103797 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab805559-bee4-4905-95db-b9fd0da719ed-logs\") pod \"watcher-kuttl-applier-0\" (UID: 
\"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.104013 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.104226 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.104333 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.104419 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.104524 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht7vp\" (UniqueName: 
\"kubernetes.io/projected/ab805559-bee4-4905-95db-b9fd0da719ed-kube-api-access-ht7vp\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.104861 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cacd898a-7524-4989-95ce-0b7a05e318ba-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.104973 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.105040 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.105141 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.105504 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.108239 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.108560 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.108773 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.124788 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht7vp\" (UniqueName: \"kubernetes.io/projected/ab805559-bee4-4905-95db-b9fd0da719ed-kube-api-access-ht7vp\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.206840 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ljtp\" (UniqueName: 
\"kubernetes.io/projected/f2bc50a4-5dd7-42df-9279-4d07dd760275-kube-api-access-2ljtp\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207148 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207169 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207210 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207242 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207285 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/cacd898a-7524-4989-95ce-0b7a05e318ba-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207307 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207322 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207350 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207366 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207386 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f2bc50a4-5dd7-42df-9279-4d07dd760275-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207399 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xgz2\" (UniqueName: \"kubernetes.io/projected/cacd898a-7524-4989-95ce-0b7a05e318ba-kube-api-access-6xgz2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.208232 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bc50a4-5dd7-42df-9279-4d07dd760275-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.208505 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cacd898a-7524-4989-95ce-0b7a05e318ba-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.210685 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.210719 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-cert-memcached-mtls\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.211745 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.212615 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.215722 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.216089 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.222738 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: 
\"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.223741 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.224227 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ljtp\" (UniqueName: \"kubernetes.io/projected/f2bc50a4-5dd7-42df-9279-4d07dd760275-kube-api-access-2ljtp\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.224657 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xgz2\" (UniqueName: \"kubernetes.io/projected/cacd898a-7524-4989-95ce-0b7a05e318ba-kube-api-access-6xgz2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.265553 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.287089 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.301792 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.531007 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" path="/var/lib/kubelet/pods/88605b61-373f-4ead-b09a-9aeda8950ab0/volumes" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.764230 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.856819 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.899005 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.643271 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ab805559-bee4-4905-95db-b9fd0da719ed","Type":"ContainerStarted","Data":"6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c"} Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.643561 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ab805559-bee4-4905-95db-b9fd0da719ed","Type":"ContainerStarted","Data":"b7111921d0bcb4ece6cd10fa5e18b18895898ed7aa2249d33d860e1754a300c5"} Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.650491 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2bc50a4-5dd7-42df-9279-4d07dd760275","Type":"ContainerStarted","Data":"52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae"} Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.650544 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"f2bc50a4-5dd7-42df-9279-4d07dd760275","Type":"ContainerStarted","Data":"1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1"} Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.650559 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2bc50a4-5dd7-42df-9279-4d07dd760275","Type":"ContainerStarted","Data":"a2912924b0f5fcf0004fb3575adbc36625d7116187c79bb884ef553334908c42"} Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.650713 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.652398 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cacd898a-7524-4989-95ce-0b7a05e318ba","Type":"ContainerStarted","Data":"e07eaa72eb177eaf2a37100cc97cdd1c26f5ab5989805c27ed8f959646687ff1"} Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.652442 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cacd898a-7524-4989-95ce-0b7a05e318ba","Type":"ContainerStarted","Data":"a4ce0a9663c549496780173c4daf62f761575e14888d812de2623b4acc727c19"} Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.664681 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.664662602 podStartE2EDuration="2.664662602s" podCreationTimestamp="2026-01-26 23:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:34:47.661889763 +0000 UTC m=+1591.826597228" watchObservedRunningTime="2026-01-26 23:34:47.664662602 +0000 UTC m=+1591.829370067" Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.691244 4995 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.69122729 podStartE2EDuration="2.69122729s" podCreationTimestamp="2026-01-26 23:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:34:47.686904421 +0000 UTC m=+1591.851611886" watchObservedRunningTime="2026-01-26 23:34:47.69122729 +0000 UTC m=+1591.855934755" Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.711976 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.71195182 podStartE2EDuration="2.71195182s" podCreationTimestamp="2026-01-26 23:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:34:47.706058812 +0000 UTC m=+1591.870766277" watchObservedRunningTime="2026-01-26 23:34:47.71195182 +0000 UTC m=+1591.876659285" Jan 26 23:34:50 crc kubenswrapper[4995]: I0126 23:34:50.159792 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:51 crc kubenswrapper[4995]: I0126 23:34:51.266503 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:51 crc kubenswrapper[4995]: I0126 23:34:51.288980 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.266382 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.289608 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:56 crc 
kubenswrapper[4995]: I0126 23:34:56.302546 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.312907 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.353069 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.394654 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.754839 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.763716 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.785643 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.806791 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:57 crc kubenswrapper[4995]: I0126 23:34:57.962173 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:57 crc kubenswrapper[4995]: I0126 23:34:57.962709 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-central-agent" 
containerID="cri-o://6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f" gracePeriod=30 Jan 26 23:34:57 crc kubenswrapper[4995]: I0126 23:34:57.964538 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="proxy-httpd" containerID="cri-o://fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808" gracePeriod=30 Jan 26 23:34:57 crc kubenswrapper[4995]: I0126 23:34:57.964662 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-notification-agent" containerID="cri-o://2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502" gracePeriod=30 Jan 26 23:34:57 crc kubenswrapper[4995]: I0126 23:34:57.964801 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="sg-core" containerID="cri-o://c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2" gracePeriod=30 Jan 26 23:34:57 crc kubenswrapper[4995]: I0126 23:34:57.975779 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.205:3000/\": EOF" Jan 26 23:34:58 crc kubenswrapper[4995]: I0126 23:34:58.775863 4995 generic.go:334] "Generic (PLEG): container finished" podID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerID="fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808" exitCode=0 Jan 26 23:34:58 crc kubenswrapper[4995]: I0126 23:34:58.775907 4995 generic.go:334] "Generic (PLEG): container finished" podID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerID="c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2" exitCode=2 
Jan 26 23:34:58 crc kubenswrapper[4995]: I0126 23:34:58.775923 4995 generic.go:334] "Generic (PLEG): container finished" podID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerID="6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f" exitCode=0 Jan 26 23:34:58 crc kubenswrapper[4995]: I0126 23:34:58.775967 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerDied","Data":"fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808"} Jan 26 23:34:58 crc kubenswrapper[4995]: I0126 23:34:58.776018 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerDied","Data":"c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2"} Jan 26 23:34:58 crc kubenswrapper[4995]: I0126 23:34:58.776041 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerDied","Data":"6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f"} Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.573323 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.653773 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-run-httpd\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.653844 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bgkz\" (UniqueName: \"kubernetes.io/projected/0e1b3fa8-47bf-4484-98a7-b131e9bed123-kube-api-access-4bgkz\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.653910 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-log-httpd\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.653953 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-ceilometer-tls-certs\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.653990 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-scripts\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.654062 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-config-data\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.654168 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-combined-ca-bundle\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.654206 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-sg-core-conf-yaml\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.656606 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.656737 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.660309 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1b3fa8-47bf-4484-98a7-b131e9bed123-kube-api-access-4bgkz" (OuterVolumeSpecName: "kube-api-access-4bgkz") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "kube-api-access-4bgkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.673921 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-scripts" (OuterVolumeSpecName: "scripts") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.689998 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.713880 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.732751 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.739350 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-config-data" (OuterVolumeSpecName: "config-data") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.756315 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.756352 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.756370 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.756381 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc 
kubenswrapper[4995]: I0126 23:35:00.756392 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.756402 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.756413 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.756424 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bgkz\" (UniqueName: \"kubernetes.io/projected/0e1b3fa8-47bf-4484-98a7-b131e9bed123-kube-api-access-4bgkz\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.793739 4995 generic.go:334] "Generic (PLEG): container finished" podID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerID="2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502" exitCode=0 Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.793776 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerDied","Data":"2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502"} Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.793829 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerDied","Data":"a00645c5e1dd09271e74863e0e5c91226b9b85c9d1bb4a0367151708e8674b54"} Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.793824 
4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.793849 4995 scope.go:117] "RemoveContainer" containerID="fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.814655 4995 scope.go:117] "RemoveContainer" containerID="c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.828371 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.837337 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.841439 4995 scope.go:117] "RemoveContainer" containerID="2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.853718 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:00 crc kubenswrapper[4995]: E0126 23:35:00.861503 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-central-agent" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.861853 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-central-agent" Jan 26 23:35:00 crc kubenswrapper[4995]: E0126 23:35:00.861945 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-notification-agent" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.862016 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-notification-agent" Jan 26 23:35:00 crc 
kubenswrapper[4995]: E0126 23:35:00.862129 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="sg-core" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.862203 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="sg-core" Jan 26 23:35:00 crc kubenswrapper[4995]: E0126 23:35:00.862281 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="proxy-httpd" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.862363 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="proxy-httpd" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.862713 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-notification-agent" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.862800 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="sg-core" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.862878 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-central-agent" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.862960 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="proxy-httpd" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.865076 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.867389 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.870400 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.870505 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.870677 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.903233 4995 scope.go:117] "RemoveContainer" containerID="6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.927391 4995 scope.go:117] "RemoveContainer" containerID="fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808" Jan 26 23:35:00 crc kubenswrapper[4995]: E0126 23:35:00.928061 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808\": container with ID starting with fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808 not found: ID does not exist" containerID="fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.928112 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808"} err="failed to get container status \"fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808\": rpc error: code = NotFound desc = could not find container 
\"fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808\": container with ID starting with fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808 not found: ID does not exist" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.928135 4995 scope.go:117] "RemoveContainer" containerID="c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2" Jan 26 23:35:00 crc kubenswrapper[4995]: E0126 23:35:00.928371 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2\": container with ID starting with c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2 not found: ID does not exist" containerID="c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.928432 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2"} err="failed to get container status \"c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2\": rpc error: code = NotFound desc = could not find container \"c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2\": container with ID starting with c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2 not found: ID does not exist" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.928448 4995 scope.go:117] "RemoveContainer" containerID="2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502" Jan 26 23:35:00 crc kubenswrapper[4995]: E0126 23:35:00.928834 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502\": container with ID starting with 2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502 not found: ID does not exist" 
containerID="2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.928859 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502"} err="failed to get container status \"2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502\": rpc error: code = NotFound desc = could not find container \"2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502\": container with ID starting with 2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502 not found: ID does not exist" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.928877 4995 scope.go:117] "RemoveContainer" containerID="6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f" Jan 26 23:35:00 crc kubenswrapper[4995]: E0126 23:35:00.929064 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f\": container with ID starting with 6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f not found: ID does not exist" containerID="6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.929093 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f"} err="failed to get container status \"6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f\": rpc error: code = NotFound desc = could not find container \"6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f\": container with ID starting with 6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f not found: ID does not exist" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964373 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-scripts\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964507 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964560 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-config-data\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964640 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-log-httpd\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964727 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fg78\" (UniqueName: \"kubernetes.io/projected/021b4697-13c5-4573-b049-d089667af404-kube-api-access-2fg78\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964766 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964832 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964861 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-run-httpd\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.066714 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.066773 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-run-httpd\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.066844 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-scripts\") 
pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.066886 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.066907 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-config-data\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.066943 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-log-httpd\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.066984 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fg78\" (UniqueName: \"kubernetes.io/projected/021b4697-13c5-4573-b049-d089667af404-kube-api-access-2fg78\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.067008 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc 
kubenswrapper[4995]: I0126 23:35:01.067280 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-run-httpd\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.067495 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-log-httpd\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.070506 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.072055 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-scripts\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.074211 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-config-data\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.076271 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.082961 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fg78\" (UniqueName: \"kubernetes.io/projected/021b4697-13c5-4573-b049-d089667af404-kube-api-access-2fg78\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.087045 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.189460 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.685728 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.804523 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerStarted","Data":"a273f23b5e6b3152076243b8eb373d6a89966d29af4ec92e1e40164b0324f64f"} Jan 26 23:35:02 crc kubenswrapper[4995]: I0126 23:35:02.527396 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" path="/var/lib/kubelet/pods/0e1b3fa8-47bf-4484-98a7-b131e9bed123/volumes" Jan 26 23:35:02 crc kubenswrapper[4995]: I0126 23:35:02.814608 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerStarted","Data":"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f"} Jan 26 23:35:03 crc kubenswrapper[4995]: I0126 23:35:03.826369 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerStarted","Data":"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039"} Jan 26 23:35:03 crc kubenswrapper[4995]: I0126 23:35:03.826935 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerStarted","Data":"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04"} Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.088173 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"] Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.096892 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"] Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.129758 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher0966-account-delete-7lmgj"] Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.130729 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.148212 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher0966-account-delete-7lmgj"] Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.196131 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.208978 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="ab805559-bee4-4905-95db-b9fd0da719ed" containerName="watcher-applier" containerID="cri-o://6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" gracePeriod=30 Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.233760 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-operator-scripts\") pod \"watcher0966-account-delete-7lmgj\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.233837 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzpdf\" (UniqueName: \"kubernetes.io/projected/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-kube-api-access-bzpdf\") pod \"watcher0966-account-delete-7lmgj\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.269538 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.269798 4995 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="cacd898a-7524-4989-95ce-0b7a05e318ba" containerName="watcher-decision-engine" containerID="cri-o://e07eaa72eb177eaf2a37100cc97cdd1c26f5ab5989805c27ed8f959646687ff1" gracePeriod=30 Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.284815 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.285050 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-kuttl-api-log" containerID="cri-o://1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1" gracePeriod=30 Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.285450 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-api" containerID="cri-o://52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae" gracePeriod=30 Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.334871 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-operator-scripts\") pod \"watcher0966-account-delete-7lmgj\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.334929 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzpdf\" (UniqueName: \"kubernetes.io/projected/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-kube-api-access-bzpdf\") pod \"watcher0966-account-delete-7lmgj\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 
23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.335782 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-operator-scripts\") pod \"watcher0966-account-delete-7lmgj\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.360985 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzpdf\" (UniqueName: \"kubernetes.io/projected/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-kube-api-access-bzpdf\") pod \"watcher0966-account-delete-7lmgj\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.444693 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.526068 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b5f9d2a-2291-4153-8d71-602f827fd381" path="/var/lib/kubelet/pods/3b5f9d2a-2291-4153-8d71-602f827fd381/volumes" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.838749 4995 generic.go:334] "Generic (PLEG): container finished" podID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerID="1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1" exitCode=143 Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.838800 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2bc50a4-5dd7-42df-9279-4d07dd760275","Type":"ContainerDied","Data":"1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1"} Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.892553 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/watcher0966-account-delete-7lmgj"] Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.548492 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.654906 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-custom-prometheus-ca\") pod \"f2bc50a4-5dd7-42df-9279-4d07dd760275\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.654945 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-config-data\") pod \"f2bc50a4-5dd7-42df-9279-4d07dd760275\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.655050 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ljtp\" (UniqueName: \"kubernetes.io/projected/f2bc50a4-5dd7-42df-9279-4d07dd760275-kube-api-access-2ljtp\") pod \"f2bc50a4-5dd7-42df-9279-4d07dd760275\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.655067 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-combined-ca-bundle\") pod \"f2bc50a4-5dd7-42df-9279-4d07dd760275\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.655122 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bc50a4-5dd7-42df-9279-4d07dd760275-logs\") pod \"f2bc50a4-5dd7-42df-9279-4d07dd760275\" (UID: 
\"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.655136 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-cert-memcached-mtls\") pod \"f2bc50a4-5dd7-42df-9279-4d07dd760275\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.656256 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2bc50a4-5dd7-42df-9279-4d07dd760275-logs" (OuterVolumeSpecName: "logs") pod "f2bc50a4-5dd7-42df-9279-4d07dd760275" (UID: "f2bc50a4-5dd7-42df-9279-4d07dd760275"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.663415 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bc50a4-5dd7-42df-9279-4d07dd760275-kube-api-access-2ljtp" (OuterVolumeSpecName: "kube-api-access-2ljtp") pod "f2bc50a4-5dd7-42df-9279-4d07dd760275" (UID: "f2bc50a4-5dd7-42df-9279-4d07dd760275"). InnerVolumeSpecName "kube-api-access-2ljtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.685778 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f2bc50a4-5dd7-42df-9279-4d07dd760275" (UID: "f2bc50a4-5dd7-42df-9279-4d07dd760275"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.702847 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-config-data" (OuterVolumeSpecName: "config-data") pod "f2bc50a4-5dd7-42df-9279-4d07dd760275" (UID: "f2bc50a4-5dd7-42df-9279-4d07dd760275"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.707191 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2bc50a4-5dd7-42df-9279-4d07dd760275" (UID: "f2bc50a4-5dd7-42df-9279-4d07dd760275"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.725267 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "f2bc50a4-5dd7-42df-9279-4d07dd760275" (UID: "f2bc50a4-5dd7-42df-9279-4d07dd760275"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.757167 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.757196 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.757206 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ljtp\" (UniqueName: \"kubernetes.io/projected/f2bc50a4-5dd7-42df-9279-4d07dd760275-kube-api-access-2ljtp\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.757215 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.757224 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bc50a4-5dd7-42df-9279-4d07dd760275-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.757232 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.850242 4995 generic.go:334] "Generic (PLEG): container finished" podID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerID="52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae" exitCode=0 Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.850324 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2bc50a4-5dd7-42df-9279-4d07dd760275","Type":"ContainerDied","Data":"52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae"} Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.850345 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.850369 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2bc50a4-5dd7-42df-9279-4d07dd760275","Type":"ContainerDied","Data":"a2912924b0f5fcf0004fb3575adbc36625d7116187c79bb884ef553334908c42"} Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.850392 4995 scope.go:117] "RemoveContainer" containerID="52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.854466 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerStarted","Data":"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7"} Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.855722 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.859646 4995 generic.go:334] "Generic (PLEG): container finished" podID="8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd" containerID="404080cef7718114d3ef40681ba2896d4b0b7f3fac87f1f21efcf7b7105e0285" exitCode=0 Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.859705 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" 
event={"ID":"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd","Type":"ContainerDied","Data":"404080cef7718114d3ef40681ba2896d4b0b7f3fac87f1f21efcf7b7105e0285"} Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.859738 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" event={"ID":"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd","Type":"ContainerStarted","Data":"3e46777b83ef7137b672d4d002a9a57f36054fe8886eaa98beed4e83da6fa179"} Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.884401 4995 scope.go:117] "RemoveContainer" containerID="1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.888959 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.625176278 podStartE2EDuration="5.888939893s" podCreationTimestamp="2026-01-26 23:35:00 +0000 UTC" firstStartedPulling="2026-01-26 23:35:01.684135285 +0000 UTC m=+1605.848842760" lastFinishedPulling="2026-01-26 23:35:04.94789891 +0000 UTC m=+1609.112606375" observedRunningTime="2026-01-26 23:35:05.875753251 +0000 UTC m=+1610.040460716" watchObservedRunningTime="2026-01-26 23:35:05.888939893 +0000 UTC m=+1610.053647358" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.898583 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.904949 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.915722 4995 scope.go:117] "RemoveContainer" containerID="52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae" Jan 26 23:35:05 crc kubenswrapper[4995]: E0126 23:35:05.916082 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae\": container with ID starting with 52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae not found: ID does not exist" containerID="52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.916127 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae"} err="failed to get container status \"52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae\": rpc error: code = NotFound desc = could not find container \"52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae\": container with ID starting with 52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae not found: ID does not exist" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.916149 4995 scope.go:117] "RemoveContainer" containerID="1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1" Jan 26 23:35:05 crc kubenswrapper[4995]: E0126 23:35:05.916394 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1\": container with ID starting with 1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1 not found: ID does not exist" containerID="1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.916414 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1"} err="failed to get container status \"1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1\": rpc error: code = NotFound desc = could not find container \"1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1\": container with ID 
starting with 1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1 not found: ID does not exist" Jan 26 23:35:06 crc kubenswrapper[4995]: E0126 23:35:06.267667 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:35:06 crc kubenswrapper[4995]: E0126 23:35:06.271566 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:35:06 crc kubenswrapper[4995]: E0126 23:35:06.275259 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:35:06 crc kubenswrapper[4995]: E0126 23:35:06.275346 4995 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="ab805559-bee4-4905-95db-b9fd0da719ed" containerName="watcher-applier" Jan 26 23:35:06 crc kubenswrapper[4995]: I0126 23:35:06.526246 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" path="/var/lib/kubelet/pods/f2bc50a4-5dd7-42df-9279-4d07dd760275/volumes" Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.235931 4995 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.412387 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.486573 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-operator-scripts\") pod \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.486631 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzpdf\" (UniqueName: \"kubernetes.io/projected/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-kube-api-access-bzpdf\") pod \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.487015 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd" (UID: "8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.487198 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.493769 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-kube-api-access-bzpdf" (OuterVolumeSpecName: "kube-api-access-bzpdf") pod "8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd" (UID: "8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd"). InnerVolumeSpecName "kube-api-access-bzpdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.588500 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzpdf\" (UniqueName: \"kubernetes.io/projected/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-kube-api-access-bzpdf\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.895504 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" event={"ID":"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd","Type":"ContainerDied","Data":"3e46777b83ef7137b672d4d002a9a57f36054fe8886eaa98beed4e83da6fa179"} Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.895830 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.895908 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e46777b83ef7137b672d4d002a9a57f36054fe8886eaa98beed4e83da6fa179" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.434340 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.505072 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-config-data\") pod \"ab805559-bee4-4905-95db-b9fd0da719ed\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.505959 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht7vp\" (UniqueName: \"kubernetes.io/projected/ab805559-bee4-4905-95db-b9fd0da719ed-kube-api-access-ht7vp\") pod \"ab805559-bee4-4905-95db-b9fd0da719ed\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.506144 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab805559-bee4-4905-95db-b9fd0da719ed-logs\") pod \"ab805559-bee4-4905-95db-b9fd0da719ed\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.506247 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-combined-ca-bundle\") pod \"ab805559-bee4-4905-95db-b9fd0da719ed\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.506386 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-cert-memcached-mtls\") pod \"ab805559-bee4-4905-95db-b9fd0da719ed\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.507475 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ab805559-bee4-4905-95db-b9fd0da719ed-logs" (OuterVolumeSpecName: "logs") pod "ab805559-bee4-4905-95db-b9fd0da719ed" (UID: "ab805559-bee4-4905-95db-b9fd0da719ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.514255 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab805559-bee4-4905-95db-b9fd0da719ed-kube-api-access-ht7vp" (OuterVolumeSpecName: "kube-api-access-ht7vp") pod "ab805559-bee4-4905-95db-b9fd0da719ed" (UID: "ab805559-bee4-4905-95db-b9fd0da719ed"). InnerVolumeSpecName "kube-api-access-ht7vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.539135 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab805559-bee4-4905-95db-b9fd0da719ed" (UID: "ab805559-bee4-4905-95db-b9fd0da719ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.565681 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-config-data" (OuterVolumeSpecName: "config-data") pod "ab805559-bee4-4905-95db-b9fd0da719ed" (UID: "ab805559-bee4-4905-95db-b9fd0da719ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.587583 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "ab805559-bee4-4905-95db-b9fd0da719ed" (UID: "ab805559-bee4-4905-95db-b9fd0da719ed"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.608201 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht7vp\" (UniqueName: \"kubernetes.io/projected/ab805559-bee4-4905-95db-b9fd0da719ed-kube-api-access-ht7vp\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.608229 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab805559-bee4-4905-95db-b9fd0da719ed-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.608241 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.608250 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.608259 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.905555 4995 generic.go:334] "Generic (PLEG): container finished" podID="cacd898a-7524-4989-95ce-0b7a05e318ba" containerID="e07eaa72eb177eaf2a37100cc97cdd1c26f5ab5989805c27ed8f959646687ff1" exitCode=0 Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.905737 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"cacd898a-7524-4989-95ce-0b7a05e318ba","Type":"ContainerDied","Data":"e07eaa72eb177eaf2a37100cc97cdd1c26f5ab5989805c27ed8f959646687ff1"} Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.912239 4995 generic.go:334] "Generic (PLEG): container finished" podID="ab805559-bee4-4905-95db-b9fd0da719ed" containerID="6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" exitCode=0 Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.912383 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ab805559-bee4-4905-95db-b9fd0da719ed","Type":"ContainerDied","Data":"6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c"} Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.912479 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.912771 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ab805559-bee4-4905-95db-b9fd0da719ed","Type":"ContainerDied","Data":"b7111921d0bcb4ece6cd10fa5e18b18895898ed7aa2249d33d860e1754a300c5"} Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.912904 4995 scope.go:117] "RemoveContainer" containerID="6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.913901 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-central-agent" containerID="cri-o://6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" gracePeriod=30 Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.914153 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" 
podUID="021b4697-13c5-4573-b049-d089667af404" containerName="proxy-httpd" containerID="cri-o://9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" gracePeriod=30 Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.914218 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="sg-core" containerID="cri-o://551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" gracePeriod=30 Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.914267 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-notification-agent" containerID="cri-o://864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" gracePeriod=30 Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.939400 4995 scope.go:117] "RemoveContainer" containerID="6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" Jan 26 23:35:08 crc kubenswrapper[4995]: E0126 23:35:08.939982 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c\": container with ID starting with 6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c not found: ID does not exist" containerID="6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.940012 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c"} err="failed to get container status \"6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c\": rpc error: code = NotFound desc = could not find container 
\"6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c\": container with ID starting with 6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c not found: ID does not exist" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.963214 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.982190 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.161354 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-tddnh"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.175638 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-tddnh"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.186969 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.199000 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-0966-account-create-update-wjl7d"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.207621 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher0966-account-delete-7lmgj"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.221174 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-0966-account-create-update-wjl7d"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.231164 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher0966-account-delete-7lmgj"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277119 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-b6hk2"] Jan 26 
23:35:09 crc kubenswrapper[4995]: E0126 23:35:09.277438 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab805559-bee4-4905-95db-b9fd0da719ed" containerName="watcher-applier" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277451 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab805559-bee4-4905-95db-b9fd0da719ed" containerName="watcher-applier" Jan 26 23:35:09 crc kubenswrapper[4995]: E0126 23:35:09.277466 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-api" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277472 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-api" Jan 26 23:35:09 crc kubenswrapper[4995]: E0126 23:35:09.277488 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-kuttl-api-log" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277495 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-kuttl-api-log" Jan 26 23:35:09 crc kubenswrapper[4995]: E0126 23:35:09.277506 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd" containerName="mariadb-account-delete" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277511 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd" containerName="mariadb-account-delete" Jan 26 23:35:09 crc kubenswrapper[4995]: E0126 23:35:09.277523 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacd898a-7524-4989-95ce-0b7a05e318ba" containerName="watcher-decision-engine" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277529 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacd898a-7524-4989-95ce-0b7a05e318ba" 
containerName="watcher-decision-engine" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277662 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="cacd898a-7524-4989-95ce-0b7a05e318ba" containerName="watcher-decision-engine" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277676 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd" containerName="mariadb-account-delete" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277686 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-api" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277694 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab805559-bee4-4905-95db-b9fd0da719ed" containerName="watcher-applier" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277703 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-kuttl-api-log" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.278271 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.291665 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-b6hk2"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.324264 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-custom-prometheus-ca\") pod \"cacd898a-7524-4989-95ce-0b7a05e318ba\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.324338 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-combined-ca-bundle\") pod \"cacd898a-7524-4989-95ce-0b7a05e318ba\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.324435 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cacd898a-7524-4989-95ce-0b7a05e318ba-logs\") pod \"cacd898a-7524-4989-95ce-0b7a05e318ba\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.324505 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-cert-memcached-mtls\") pod \"cacd898a-7524-4989-95ce-0b7a05e318ba\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.324535 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-config-data\") pod \"cacd898a-7524-4989-95ce-0b7a05e318ba\" (UID: 
\"cacd898a-7524-4989-95ce-0b7a05e318ba\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.324569 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xgz2\" (UniqueName: \"kubernetes.io/projected/cacd898a-7524-4989-95ce-0b7a05e318ba-kube-api-access-6xgz2\") pod \"cacd898a-7524-4989-95ce-0b7a05e318ba\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.325333 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cacd898a-7524-4989-95ce-0b7a05e318ba-logs" (OuterVolumeSpecName: "logs") pod "cacd898a-7524-4989-95ce-0b7a05e318ba" (UID: "cacd898a-7524-4989-95ce-0b7a05e318ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.353725 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cacd898a-7524-4989-95ce-0b7a05e318ba-kube-api-access-6xgz2" (OuterVolumeSpecName: "kube-api-access-6xgz2") pod "cacd898a-7524-4989-95ce-0b7a05e318ba" (UID: "cacd898a-7524-4989-95ce-0b7a05e318ba"). InnerVolumeSpecName "kube-api-access-6xgz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.361411 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-sq8zx"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.362447 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.364547 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-sq8zx"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.367235 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cacd898a-7524-4989-95ce-0b7a05e318ba" (UID: "cacd898a-7524-4989-95ce-0b7a05e318ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.367411 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.388041 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "cacd898a-7524-4989-95ce-0b7a05e318ba" (UID: "cacd898a-7524-4989-95ce-0b7a05e318ba"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.407986 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-config-data" (OuterVolumeSpecName: "config-data") pod "cacd898a-7524-4989-95ce-0b7a05e318ba" (UID: "cacd898a-7524-4989-95ce-0b7a05e318ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.427702 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949c118d-bfd2-4707-9091-abc3434a4fb6-operator-scripts\") pod \"watcher-test-account-create-update-sq8zx\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.427758 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e413561-4428-409c-9ca8-2eb61cbe1489-operator-scripts\") pod \"watcher-db-create-b6hk2\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.427809 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plc7r\" (UniqueName: \"kubernetes.io/projected/0e413561-4428-409c-9ca8-2eb61cbe1489-kube-api-access-plc7r\") pod \"watcher-db-create-b6hk2\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.427892 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx58h\" (UniqueName: \"kubernetes.io/projected/949c118d-bfd2-4707-9091-abc3434a4fb6-kube-api-access-sx58h\") pod \"watcher-test-account-create-update-sq8zx\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.428001 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.428017 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cacd898a-7524-4989-95ce-0b7a05e318ba-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.428030 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.428043 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xgz2\" (UniqueName: \"kubernetes.io/projected/cacd898a-7524-4989-95ce-0b7a05e318ba-kube-api-access-6xgz2\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.428056 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.455491 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "cacd898a-7524-4989-95ce-0b7a05e318ba" (UID: "cacd898a-7524-4989-95ce-0b7a05e318ba"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.529781 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plc7r\" (UniqueName: \"kubernetes.io/projected/0e413561-4428-409c-9ca8-2eb61cbe1489-kube-api-access-plc7r\") pod \"watcher-db-create-b6hk2\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.529874 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx58h\" (UniqueName: \"kubernetes.io/projected/949c118d-bfd2-4707-9091-abc3434a4fb6-kube-api-access-sx58h\") pod \"watcher-test-account-create-update-sq8zx\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.529974 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949c118d-bfd2-4707-9091-abc3434a4fb6-operator-scripts\") pod \"watcher-test-account-create-update-sq8zx\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.530016 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e413561-4428-409c-9ca8-2eb61cbe1489-operator-scripts\") pod \"watcher-db-create-b6hk2\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.530071 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 
26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.531319 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e413561-4428-409c-9ca8-2eb61cbe1489-operator-scripts\") pod \"watcher-db-create-b6hk2\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.532381 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949c118d-bfd2-4707-9091-abc3434a4fb6-operator-scripts\") pod \"watcher-test-account-create-update-sq8zx\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.559516 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plc7r\" (UniqueName: \"kubernetes.io/projected/0e413561-4428-409c-9ca8-2eb61cbe1489-kube-api-access-plc7r\") pod \"watcher-db-create-b6hk2\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.559524 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx58h\" (UniqueName: \"kubernetes.io/projected/949c118d-bfd2-4707-9091-abc3434a4fb6-kube-api-access-sx58h\") pod \"watcher-test-account-create-update-sq8zx\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.601548 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.697070 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.818005 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834464 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-log-httpd\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834532 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-sg-core-conf-yaml\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834597 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-run-httpd\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834641 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-scripts\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834664 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-ceilometer-tls-certs\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: 
\"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834681 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fg78\" (UniqueName: \"kubernetes.io/projected/021b4697-13c5-4573-b049-d089667af404-kube-api-access-2fg78\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834699 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-combined-ca-bundle\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834720 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-config-data\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.835262 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.835992 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.840256 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/021b4697-13c5-4573-b049-d089667af404-kube-api-access-2fg78" (OuterVolumeSpecName: "kube-api-access-2fg78") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "kube-api-access-2fg78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.841000 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-scripts" (OuterVolumeSpecName: "scripts") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.869276 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.894001 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.922965 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.946487 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.946520 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.946533 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.946544 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.946556 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.946567 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fg78\" (UniqueName: 
\"kubernetes.io/projected/021b4697-13c5-4573-b049-d089667af404-kube-api-access-2fg78\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.946574 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.951917 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-config-data" (OuterVolumeSpecName: "config-data") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954405 4995 generic.go:334] "Generic (PLEG): container finished" podID="021b4697-13c5-4573-b049-d089667af404" containerID="9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" exitCode=0 Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954429 4995 generic.go:334] "Generic (PLEG): container finished" podID="021b4697-13c5-4573-b049-d089667af404" containerID="551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" exitCode=2 Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954439 4995 generic.go:334] "Generic (PLEG): container finished" podID="021b4697-13c5-4573-b049-d089667af404" containerID="864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" exitCode=0 Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954450 4995 generic.go:334] "Generic (PLEG): container finished" podID="021b4697-13c5-4573-b049-d089667af404" containerID="6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" exitCode=0 Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954500 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerDied","Data":"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7"} Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954529 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerDied","Data":"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039"} Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954542 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerDied","Data":"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04"} Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954553 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerDied","Data":"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f"} Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954564 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerDied","Data":"a273f23b5e6b3152076243b8eb373d6a89966d29af4ec92e1e40164b0324f64f"} Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954582 4995 scope.go:117] "RemoveContainer" containerID="9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954719 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.965448 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cacd898a-7524-4989-95ce-0b7a05e318ba","Type":"ContainerDied","Data":"a4ce0a9663c549496780173c4daf62f761575e14888d812de2623b4acc727c19"} Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.965522 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.006069 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.019949 4995 scope.go:117] "RemoveContainer" containerID="551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.037654 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.047467 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.049413 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.049763 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="proxy-httpd" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.049785 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="proxy-httpd" Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.049804 4995 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-notification-agent" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.049813 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-notification-agent" Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.049837 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="sg-core" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.049843 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="sg-core" Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.049852 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-central-agent" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.049860 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-central-agent" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.050018 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="proxy-httpd" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.050035 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-central-agent" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.050043 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-notification-agent" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.050055 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="sg-core" Jan 26 23:35:10 crc kubenswrapper[4995]: 
I0126 23:35:10.051419 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.054448 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.054655 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.054820 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.060136 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.101716 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.112204 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-b6hk2"] Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.125970 4995 scope.go:117] "RemoveContainer" containerID="864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.134935 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.152017 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sqrx\" (UniqueName: \"kubernetes.io/projected/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-kube-api-access-8sqrx\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 
23:35:10.157270 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.157332 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-log-httpd\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.157543 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-run-httpd\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.157586 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-config-data\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.157626 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-scripts\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.157674 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.157697 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.207517 4995 scope.go:117] "RemoveContainer" containerID="6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.241765 4995 scope.go:117] "RemoveContainer" containerID="9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.243430 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": container with ID starting with 9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7 not found: ID does not exist" containerID="9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.243476 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7"} err="failed to get container status \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": rpc error: code = NotFound desc = could not find container \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": container with ID starting with 
9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.243502 4995 scope.go:117] "RemoveContainer" containerID="551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.243952 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": container with ID starting with 551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039 not found: ID does not exist" containerID="551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.243983 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039"} err="failed to get container status \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": rpc error: code = NotFound desc = could not find container \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": container with ID starting with 551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.243997 4995 scope.go:117] "RemoveContainer" containerID="864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.244603 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": container with ID starting with 864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04 not found: ID does not exist" containerID="864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" Jan 26 23:35:10 crc 
kubenswrapper[4995]: I0126 23:35:10.244645 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04"} err="failed to get container status \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": rpc error: code = NotFound desc = could not find container \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": container with ID starting with 864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.244674 4995 scope.go:117] "RemoveContainer" containerID="6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.245036 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": container with ID starting with 6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f not found: ID does not exist" containerID="6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.245065 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f"} err="failed to get container status \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": rpc error: code = NotFound desc = could not find container \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": container with ID starting with 6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.245080 4995 scope.go:117] "RemoveContainer" containerID="9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" Jan 26 
23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.245462 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7"} err="failed to get container status \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": rpc error: code = NotFound desc = could not find container \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": container with ID starting with 9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.245486 4995 scope.go:117] "RemoveContainer" containerID="551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.245797 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039"} err="failed to get container status \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": rpc error: code = NotFound desc = could not find container \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": container with ID starting with 551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.245816 4995 scope.go:117] "RemoveContainer" containerID="864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.247589 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04"} err="failed to get container status \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": rpc error: code = NotFound desc = could not find container 
\"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": container with ID starting with 864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.247636 4995 scope.go:117] "RemoveContainer" containerID="6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.248431 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f"} err="failed to get container status \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": rpc error: code = NotFound desc = could not find container \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": container with ID starting with 6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.248470 4995 scope.go:117] "RemoveContainer" containerID="9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.248946 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7"} err="failed to get container status \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": rpc error: code = NotFound desc = could not find container \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": container with ID starting with 9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.248999 4995 scope.go:117] "RemoveContainer" containerID="551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.250689 4995 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039"} err="failed to get container status \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": rpc error: code = NotFound desc = could not find container \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": container with ID starting with 551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.250718 4995 scope.go:117] "RemoveContainer" containerID="864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.251037 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04"} err="failed to get container status \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": rpc error: code = NotFound desc = could not find container \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": container with ID starting with 864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.251067 4995 scope.go:117] "RemoveContainer" containerID="6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.251476 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f"} err="failed to get container status \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": rpc error: code = NotFound desc = could not find container \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": container with ID starting with 
6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.251498 4995 scope.go:117] "RemoveContainer" containerID="9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.251720 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7"} err="failed to get container status \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": rpc error: code = NotFound desc = could not find container \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": container with ID starting with 9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.251743 4995 scope.go:117] "RemoveContainer" containerID="551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.251982 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039"} err="failed to get container status \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": rpc error: code = NotFound desc = could not find container \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": container with ID starting with 551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.252003 4995 scope.go:117] "RemoveContainer" containerID="864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.252242 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04"} err="failed to get container status \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": rpc error: code = NotFound desc = could not find container \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": container with ID starting with 864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.252262 4995 scope.go:117] "RemoveContainer" containerID="6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.252463 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f"} err="failed to get container status \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": rpc error: code = NotFound desc = could not find container \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": container with ID starting with 6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.252482 4995 scope.go:117] "RemoveContainer" containerID="e07eaa72eb177eaf2a37100cc97cdd1c26f5ab5989805c27ed8f959646687ff1" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.258941 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-config-data\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.258986 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-scripts\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.259016 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.259031 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.259060 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sqrx\" (UniqueName: \"kubernetes.io/projected/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-kube-api-access-8sqrx\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.259130 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.259156 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-log-httpd\") pod \"ceilometer-0\" (UID: 
\"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.259221 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-run-httpd\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.259624 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-run-httpd\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.263232 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-config-data\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.264764 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.264809 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-log-httpd\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.268805 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.273803 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-scripts\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.280793 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.289298 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sqrx\" (UniqueName: \"kubernetes.io/projected/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-kube-api-access-8sqrx\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.407221 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-sq8zx"] Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.467416 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.529998 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="021b4697-13c5-4573-b049-d089667af404" path="/var/lib/kubelet/pods/021b4697-13c5-4573-b049-d089667af404/volumes" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.530908 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a18765-f113-401c-850b-e585b2f3bd59" path="/var/lib/kubelet/pods/32a18765-f113-401c-850b-e585b2f3bd59/volumes" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.531521 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd" path="/var/lib/kubelet/pods/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd/volumes" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.532880 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab805559-bee4-4905-95db-b9fd0da719ed" path="/var/lib/kubelet/pods/ab805559-bee4-4905-95db-b9fd0da719ed/volumes" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.533535 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cacd898a-7524-4989-95ce-0b7a05e318ba" path="/var/lib/kubelet/pods/cacd898a-7524-4989-95ce-0b7a05e318ba/volumes" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.534125 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d487adb0-ddf0-4932-9fad-09dfb2de1d00" path="/var/lib/kubelet/pods/d487adb0-ddf0-4932-9fad-09dfb2de1d00/volumes" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.893971 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.894299 4995 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.984139 4995 generic.go:334] "Generic (PLEG): container finished" podID="0e413561-4428-409c-9ca8-2eb61cbe1489" containerID="eaa76726f01faaa0a08761d9ea0a24bad284c08bc58814b2904115408ab201e0" exitCode=0 Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.984284 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-b6hk2" event={"ID":"0e413561-4428-409c-9ca8-2eb61cbe1489","Type":"ContainerDied","Data":"eaa76726f01faaa0a08761d9ea0a24bad284c08bc58814b2904115408ab201e0"} Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.984388 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-b6hk2" event={"ID":"0e413561-4428-409c-9ca8-2eb61cbe1489","Type":"ContainerStarted","Data":"c02f06581a692f6b91fcc1f7bac610f1c9f4543019b9d28ecf55c95987cc2208"} Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.986742 4995 generic.go:334] "Generic (PLEG): container finished" podID="949c118d-bfd2-4707-9091-abc3434a4fb6" containerID="30656b19d1917eb3dd412a07deb00ccc5461cf48e1c2a15363c20a1572d6ee9c" exitCode=0 Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.986812 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" event={"ID":"949c118d-bfd2-4707-9091-abc3434a4fb6","Type":"ContainerDied","Data":"30656b19d1917eb3dd412a07deb00ccc5461cf48e1c2a15363c20a1572d6ee9c"} Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.986837 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" event={"ID":"949c118d-bfd2-4707-9091-abc3434a4fb6","Type":"ContainerStarted","Data":"afc478abc4ef1cb2f494320c7726868f0a9be51ea4ba0b187258c18c53e38280"} Jan 26 23:35:11 crc kubenswrapper[4995]: I0126 23:35:11.042050 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.000799 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerStarted","Data":"f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34"} Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.001235 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerStarted","Data":"d28b1e3d26673a8db157b6635da5ecd48dd8d6acf8388d4c2dd8e2ae15407e7f"} Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.559370 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.563030 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.712078 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949c118d-bfd2-4707-9091-abc3434a4fb6-operator-scripts\") pod \"949c118d-bfd2-4707-9091-abc3434a4fb6\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.712166 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plc7r\" (UniqueName: \"kubernetes.io/projected/0e413561-4428-409c-9ca8-2eb61cbe1489-kube-api-access-plc7r\") pod \"0e413561-4428-409c-9ca8-2eb61cbe1489\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.712202 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx58h\" (UniqueName: \"kubernetes.io/projected/949c118d-bfd2-4707-9091-abc3434a4fb6-kube-api-access-sx58h\") pod \"949c118d-bfd2-4707-9091-abc3434a4fb6\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.712474 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/949c118d-bfd2-4707-9091-abc3434a4fb6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "949c118d-bfd2-4707-9091-abc3434a4fb6" (UID: "949c118d-bfd2-4707-9091-abc3434a4fb6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.712731 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e413561-4428-409c-9ca8-2eb61cbe1489-operator-scripts\") pod \"0e413561-4428-409c-9ca8-2eb61cbe1489\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.712829 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e413561-4428-409c-9ca8-2eb61cbe1489-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e413561-4428-409c-9ca8-2eb61cbe1489" (UID: "0e413561-4428-409c-9ca8-2eb61cbe1489"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.713156 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949c118d-bfd2-4707-9091-abc3434a4fb6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.713173 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e413561-4428-409c-9ca8-2eb61cbe1489-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.715779 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949c118d-bfd2-4707-9091-abc3434a4fb6-kube-api-access-sx58h" (OuterVolumeSpecName: "kube-api-access-sx58h") pod "949c118d-bfd2-4707-9091-abc3434a4fb6" (UID: "949c118d-bfd2-4707-9091-abc3434a4fb6"). InnerVolumeSpecName "kube-api-access-sx58h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.717677 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e413561-4428-409c-9ca8-2eb61cbe1489-kube-api-access-plc7r" (OuterVolumeSpecName: "kube-api-access-plc7r") pod "0e413561-4428-409c-9ca8-2eb61cbe1489" (UID: "0e413561-4428-409c-9ca8-2eb61cbe1489"). InnerVolumeSpecName "kube-api-access-plc7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.814080 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plc7r\" (UniqueName: \"kubernetes.io/projected/0e413561-4428-409c-9ca8-2eb61cbe1489-kube-api-access-plc7r\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.814130 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx58h\" (UniqueName: \"kubernetes.io/projected/949c118d-bfd2-4707-9091-abc3434a4fb6-kube-api-access-sx58h\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:13 crc kubenswrapper[4995]: I0126 23:35:13.010807 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" event={"ID":"949c118d-bfd2-4707-9091-abc3434a4fb6","Type":"ContainerDied","Data":"afc478abc4ef1cb2f494320c7726868f0a9be51ea4ba0b187258c18c53e38280"} Jan 26 23:35:13 crc kubenswrapper[4995]: I0126 23:35:13.010840 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afc478abc4ef1cb2f494320c7726868f0a9be51ea4ba0b187258c18c53e38280" Jan 26 23:35:13 crc kubenswrapper[4995]: I0126 23:35:13.010835 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:13 crc kubenswrapper[4995]: I0126 23:35:13.012879 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-b6hk2" event={"ID":"0e413561-4428-409c-9ca8-2eb61cbe1489","Type":"ContainerDied","Data":"c02f06581a692f6b91fcc1f7bac610f1c9f4543019b9d28ecf55c95987cc2208"} Jan 26 23:35:13 crc kubenswrapper[4995]: I0126 23:35:13.012940 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c02f06581a692f6b91fcc1f7bac610f1c9f4543019b9d28ecf55c95987cc2208" Jan 26 23:35:13 crc kubenswrapper[4995]: I0126 23:35:13.012938 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:13 crc kubenswrapper[4995]: I0126 23:35:13.015374 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerStarted","Data":"b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2"} Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.026046 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerStarted","Data":"52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5"} Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.641432 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd"] Jan 26 23:35:14 crc kubenswrapper[4995]: E0126 23:35:14.642015 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e413561-4428-409c-9ca8-2eb61cbe1489" containerName="mariadb-database-create" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.642031 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e413561-4428-409c-9ca8-2eb61cbe1489" 
containerName="mariadb-database-create" Jan 26 23:35:14 crc kubenswrapper[4995]: E0126 23:35:14.642060 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949c118d-bfd2-4707-9091-abc3434a4fb6" containerName="mariadb-account-create-update" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.642068 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="949c118d-bfd2-4707-9091-abc3434a4fb6" containerName="mariadb-account-create-update" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.642208 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e413561-4428-409c-9ca8-2eb61cbe1489" containerName="mariadb-database-create" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.642231 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="949c118d-bfd2-4707-9091-abc3434a4fb6" containerName="mariadb-account-create-update" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.642720 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.644450 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-crvrq" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.644937 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.653890 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd"] Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.699593 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-db-sync-config-data\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.699694 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-config-data\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.699732 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.699771 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wjbj\" (UniqueName: \"kubernetes.io/projected/f3e560ee-8e9f-41b9-a407-6879c581e5b5-kube-api-access-2wjbj\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.801119 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-db-sync-config-data\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.801413 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-config-data\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.801537 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.801682 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wjbj\" (UniqueName: \"kubernetes.io/projected/f3e560ee-8e9f-41b9-a407-6879c581e5b5-kube-api-access-2wjbj\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 
23:35:14.805713 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-config-data\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.806612 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.808593 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-db-sync-config-data\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.816055 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wjbj\" (UniqueName: \"kubernetes.io/projected/f3e560ee-8e9f-41b9-a407-6879c581e5b5-kube-api-access-2wjbj\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.966120 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:15 crc kubenswrapper[4995]: I0126 23:35:15.055220 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerStarted","Data":"f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8"} Jan 26 23:35:15 crc kubenswrapper[4995]: I0126 23:35:15.056697 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:15 crc kubenswrapper[4995]: I0126 23:35:15.094695 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.885939571 podStartE2EDuration="5.094678084s" podCreationTimestamp="2026-01-26 23:35:10 +0000 UTC" firstStartedPulling="2026-01-26 23:35:11.037331259 +0000 UTC m=+1615.202038754" lastFinishedPulling="2026-01-26 23:35:14.246069802 +0000 UTC m=+1618.410777267" observedRunningTime="2026-01-26 23:35:15.083557335 +0000 UTC m=+1619.248264810" watchObservedRunningTime="2026-01-26 23:35:15.094678084 +0000 UTC m=+1619.259385549" Jan 26 23:35:15 crc kubenswrapper[4995]: I0126 23:35:15.498963 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd"] Jan 26 23:35:16 crc kubenswrapper[4995]: I0126 23:35:16.063932 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" event={"ID":"f3e560ee-8e9f-41b9-a407-6879c581e5b5","Type":"ContainerStarted","Data":"19015ac8e66cfd6b595e7c7c92f0a44c4fa7c488406dc0b9e0bf719041c6fbf3"} Jan 26 23:35:16 crc kubenswrapper[4995]: I0126 23:35:16.064224 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" 
event={"ID":"f3e560ee-8e9f-41b9-a407-6879c581e5b5","Type":"ContainerStarted","Data":"b69275d2753e53a55507d36fe3830ac60263304214a446660b94840a30af23f6"} Jan 26 23:35:16 crc kubenswrapper[4995]: I0126 23:35:16.080423 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" podStartSLOduration=2.08040786 podStartE2EDuration="2.08040786s" podCreationTimestamp="2026-01-26 23:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:35:16.077514057 +0000 UTC m=+1620.242221522" watchObservedRunningTime="2026-01-26 23:35:16.08040786 +0000 UTC m=+1620.245115325" Jan 26 23:35:18 crc kubenswrapper[4995]: I0126 23:35:18.632135 4995 scope.go:117] "RemoveContainer" containerID="2db44657dba863e9126ee66626ff3e903712a488e479e67578bed8c8358c38cb" Jan 26 23:35:18 crc kubenswrapper[4995]: I0126 23:35:18.693223 4995 scope.go:117] "RemoveContainer" containerID="38e04a8783a7a6b7dfb30a4ee34a81ba70fceb4a22c66572b6533babbef0e4a8" Jan 26 23:35:18 crc kubenswrapper[4995]: I0126 23:35:18.720561 4995 scope.go:117] "RemoveContainer" containerID="558c3ee7288987b85477ab6a956972ed10ae51e028f06cd7ca485975cd8be8ff" Jan 26 23:35:19 crc kubenswrapper[4995]: I0126 23:35:19.091759 4995 generic.go:334] "Generic (PLEG): container finished" podID="f3e560ee-8e9f-41b9-a407-6879c581e5b5" containerID="19015ac8e66cfd6b595e7c7c92f0a44c4fa7c488406dc0b9e0bf719041c6fbf3" exitCode=0 Jan 26 23:35:19 crc kubenswrapper[4995]: I0126 23:35:19.091839 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" event={"ID":"f3e560ee-8e9f-41b9-a407-6879c581e5b5","Type":"ContainerDied","Data":"19015ac8e66cfd6b595e7c7c92f0a44c4fa7c488406dc0b9e0bf719041c6fbf3"} Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.533226 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.648591 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-db-sync-config-data\") pod \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.648741 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wjbj\" (UniqueName: \"kubernetes.io/projected/f3e560ee-8e9f-41b9-a407-6879c581e5b5-kube-api-access-2wjbj\") pod \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.648784 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-config-data\") pod \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.648837 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-combined-ca-bundle\") pod \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.654410 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e560ee-8e9f-41b9-a407-6879c581e5b5-kube-api-access-2wjbj" (OuterVolumeSpecName: "kube-api-access-2wjbj") pod "f3e560ee-8e9f-41b9-a407-6879c581e5b5" (UID: "f3e560ee-8e9f-41b9-a407-6879c581e5b5"). InnerVolumeSpecName "kube-api-access-2wjbj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.659426 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f3e560ee-8e9f-41b9-a407-6879c581e5b5" (UID: "f3e560ee-8e9f-41b9-a407-6879c581e5b5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.672376 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3e560ee-8e9f-41b9-a407-6879c581e5b5" (UID: "f3e560ee-8e9f-41b9-a407-6879c581e5b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.698039 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-config-data" (OuterVolumeSpecName: "config-data") pod "f3e560ee-8e9f-41b9-a407-6879c581e5b5" (UID: "f3e560ee-8e9f-41b9-a407-6879c581e5b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.750808 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wjbj\" (UniqueName: \"kubernetes.io/projected/f3e560ee-8e9f-41b9-a407-6879c581e5b5-kube-api-access-2wjbj\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.750838 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.750848 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.750857 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.114359 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" event={"ID":"f3e560ee-8e9f-41b9-a407-6879c581e5b5","Type":"ContainerDied","Data":"b69275d2753e53a55507d36fe3830ac60263304214a446660b94840a30af23f6"} Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.114398 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69275d2753e53a55507d36fe3830ac60263304214a446660b94840a30af23f6" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.114423 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.373504 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:35:21 crc kubenswrapper[4995]: E0126 23:35:21.373928 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e560ee-8e9f-41b9-a407-6879c581e5b5" containerName="watcher-kuttl-db-sync" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.373954 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e560ee-8e9f-41b9-a407-6879c581e5b5" containerName="watcher-kuttl-db-sync" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.375207 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e560ee-8e9f-41b9-a407-6879c581e5b5" containerName="watcher-kuttl-db-sync" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.376293 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.379566 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-crvrq" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.380010 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.390178 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.429570 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.430972 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.478841 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.514585 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.515623 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.518472 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.520780 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.543662 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.545065 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.548238 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568612 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568655 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8mtm\" (UniqueName: \"kubernetes.io/projected/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-kube-api-access-d8mtm\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568674 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568698 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q948z\" (UniqueName: \"kubernetes.io/projected/708f8ff2-4449-41ed-9436-28f9aae04852-kube-api-access-q948z\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568723 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568742 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/708f8ff2-4449-41ed-9436-28f9aae04852-logs\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568762 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568779 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568798 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568824 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568850 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568895 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.581471 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670426 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670484 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670533 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8mtm\" (UniqueName: \"kubernetes.io/projected/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-kube-api-access-d8mtm\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670551 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670571 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670593 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q948z\" (UniqueName: \"kubernetes.io/projected/708f8ff2-4449-41ed-9436-28f9aae04852-kube-api-access-q948z\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670615 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5820f715-2962-4319-b398-fa2a9975c5ea-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670632 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670666 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmvdc\" (UniqueName: \"kubernetes.io/projected/45aef819-2cda-443f-82ef-6e54a5be4261-kube-api-access-lmvdc\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670684 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/708f8ff2-4449-41ed-9436-28f9aae04852-logs\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670724 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670739 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670767 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjxst\" (UniqueName: \"kubernetes.io/projected/5820f715-2962-4319-b398-fa2a9975c5ea-kube-api-access-vjxst\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670782 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670817 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671081 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671113 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671148 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671194 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671213 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671251 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671277 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45aef819-2cda-443f-82ef-6e54a5be4261-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671306 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671975 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/708f8ff2-4449-41ed-9436-28f9aae04852-logs\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.672652 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.676747 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.677201 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.681074 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.681760 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.685671 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.686581 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.690884 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-cert-memcached-mtls\") pod 
\"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.691370 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.694303 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8mtm\" (UniqueName: \"kubernetes.io/projected/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-kube-api-access-d8mtm\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.695926 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q948z\" (UniqueName: \"kubernetes.io/projected/708f8ff2-4449-41ed-9436-28f9aae04852-kube-api-access-q948z\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.730003 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.764305 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773241 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjxst\" (UniqueName: \"kubernetes.io/projected/5820f715-2962-4319-b398-fa2a9975c5ea-kube-api-access-vjxst\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773307 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773341 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773368 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773384 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-config-data\") pod \"watcher-kuttl-applier-0\" (UID: 
\"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773406 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773423 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45aef819-2cda-443f-82ef-6e54a5be4261-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773457 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773485 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773511 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5820f715-2962-4319-b398-fa2a9975c5ea-logs\") pod \"watcher-kuttl-applier-0\" (UID: 
\"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773533 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmvdc\" (UniqueName: \"kubernetes.io/projected/45aef819-2cda-443f-82ef-6e54a5be4261-kube-api-access-lmvdc\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.779905 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45aef819-2cda-443f-82ef-6e54a5be4261-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.780465 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.781714 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.783596 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: 
\"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.784539 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5820f715-2962-4319-b398-fa2a9975c5ea-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.784803 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.786935 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.790575 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.791670 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.798870 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmvdc\" (UniqueName: \"kubernetes.io/projected/45aef819-2cda-443f-82ef-6e54a5be4261-kube-api-access-lmvdc\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.805250 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjxst\" (UniqueName: \"kubernetes.io/projected/5820f715-2962-4319-b398-fa2a9975c5ea-kube-api-access-vjxst\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.841573 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.863599 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:22 crc kubenswrapper[4995]: I0126 23:35:22.209489 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:35:22 crc kubenswrapper[4995]: I0126 23:35:22.311936 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:35:22 crc kubenswrapper[4995]: W0126 23:35:22.322268 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod708f8ff2_4449_41ed_9436_28f9aae04852.slice/crio-3552157133b281fe15163acc8920527c58a71f9a3504fc6611d9bb3087f9461f WatchSource:0}: Error finding container 3552157133b281fe15163acc8920527c58a71f9a3504fc6611d9bb3087f9461f: Status 404 returned error can't find the container with id 3552157133b281fe15163acc8920527c58a71f9a3504fc6611d9bb3087f9461f Jan 26 23:35:22 crc kubenswrapper[4995]: W0126 23:35:22.460722 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45aef819_2cda_443f_82ef_6e54a5be4261.slice/crio-49b775066686cf1211b0793974ac3eb4ebf22547fdd06b700494d55206364cad WatchSource:0}: Error finding container 49b775066686cf1211b0793974ac3eb4ebf22547fdd06b700494d55206364cad: Status 404 returned error can't find the container with id 49b775066686cf1211b0793974ac3eb4ebf22547fdd06b700494d55206364cad Jan 26 23:35:22 crc kubenswrapper[4995]: I0126 23:35:22.466404 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:35:22 crc kubenswrapper[4995]: I0126 23:35:22.493057 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:35:22 crc kubenswrapper[4995]: W0126 23:35:22.495864 4995 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5820f715_2962_4319_b398_fa2a9975c5ea.slice/crio-0dbe8986d915bf4ee76e7de6a0a0f090025a5095fbf6910a09a8181b9e9f8012 WatchSource:0}: Error finding container 0dbe8986d915bf4ee76e7de6a0a0f090025a5095fbf6910a09a8181b9e9f8012: Status 404 returned error can't find the container with id 0dbe8986d915bf4ee76e7de6a0a0f090025a5095fbf6910a09a8181b9e9f8012 Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.157593 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7ec430d5-4541-494e-88bc-d6cb00ceb6fc","Type":"ContainerStarted","Data":"6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229"} Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.157646 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7ec430d5-4541-494e-88bc-d6cb00ceb6fc","Type":"ContainerStarted","Data":"9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3"} Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.157660 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7ec430d5-4541-494e-88bc-d6cb00ceb6fc","Type":"ContainerStarted","Data":"3112ece43fb6b0f1c6030da3f87999145998e3975488ad4447ac4aa3ae034350"} Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.158130 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.159986 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"708f8ff2-4449-41ed-9436-28f9aae04852","Type":"ContainerStarted","Data":"9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017"} Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.160140 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"708f8ff2-4449-41ed-9436-28f9aae04852","Type":"ContainerStarted","Data":"594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae"}
Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.160235 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"708f8ff2-4449-41ed-9436-28f9aae04852","Type":"ContainerStarted","Data":"3552157133b281fe15163acc8920527c58a71f9a3504fc6611d9bb3087f9461f"}
Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.160532 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.162481 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"45aef819-2cda-443f-82ef-6e54a5be4261","Type":"ContainerStarted","Data":"8e0a18fefddbd7ab88304acf06d4c9193d40d1dcec642f6c4911e0a3644ff057"}
Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.162536 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"45aef819-2cda-443f-82ef-6e54a5be4261","Type":"ContainerStarted","Data":"49b775066686cf1211b0793974ac3eb4ebf22547fdd06b700494d55206364cad"}
Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.165829 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5820f715-2962-4319-b398-fa2a9975c5ea","Type":"ContainerStarted","Data":"40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f"}
Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.165868 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5820f715-2962-4319-b398-fa2a9975c5ea","Type":"ContainerStarted","Data":"0dbe8986d915bf4ee76e7de6a0a0f090025a5095fbf6910a09a8181b9e9f8012"}
Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.190250 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.189450485 podStartE2EDuration="2.189450485s" podCreationTimestamp="2026-01-26 23:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:35:23.186133882 +0000 UTC m=+1627.350841387" watchObservedRunningTime="2026-01-26 23:35:23.189450485 +0000 UTC m=+1627.354157980"
Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.211452 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.211436487 podStartE2EDuration="2.211436487s" podCreationTimestamp="2026-01-26 23:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:35:23.208197556 +0000 UTC m=+1627.372905031" watchObservedRunningTime="2026-01-26 23:35:23.211436487 +0000 UTC m=+1627.376143952"
Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.261231 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=2.261212717 podStartE2EDuration="2.261212717s" podCreationTimestamp="2026-01-26 23:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:35:23.239389049 +0000 UTC m=+1627.404096524" watchObservedRunningTime="2026-01-26 23:35:23.261212717 +0000 UTC m=+1627.425920182"
Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.268563 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.268543431 podStartE2EDuration="2.268543431s" podCreationTimestamp="2026-01-26 23:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:35:23.260342745 +0000 UTC m=+1627.425050210" watchObservedRunningTime="2026-01-26 23:35:23.268543431 +0000 UTC m=+1627.433250906"
Jan 26 23:35:25 crc kubenswrapper[4995]: I0126 23:35:25.299930 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 26 23:35:25 crc kubenswrapper[4995]: I0126 23:35:25.655424 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:35:26 crc kubenswrapper[4995]: I0126 23:35:26.730837 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:35:26 crc kubenswrapper[4995]: I0126 23:35:26.764926 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 26 23:35:26 crc kubenswrapper[4995]: I0126 23:35:26.842713 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.731169 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.737974 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.766008 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.773136 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.842330 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.865349 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.895643 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.907283 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:35:32 crc kubenswrapper[4995]: I0126 23:35:32.282000 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:35:32 crc kubenswrapper[4995]: I0126 23:35:32.289493 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 26 23:35:32 crc kubenswrapper[4995]: I0126 23:35:32.291641 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 26 23:35:32 crc kubenswrapper[4995]: I0126 23:35:32.305629 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:35:32 crc kubenswrapper[4995]: I0126 23:35:32.310721 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 26 23:35:34 crc kubenswrapper[4995]: I0126 23:35:34.377182 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:35:34 crc kubenswrapper[4995]: I0126 23:35:34.377613 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-central-agent" containerID="cri-o://f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34" gracePeriod=30
Jan 26 23:35:34 crc kubenswrapper[4995]: I0126 23:35:34.377646 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="proxy-httpd" containerID="cri-o://f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8" gracePeriod=30
Jan 26 23:35:34 crc kubenswrapper[4995]: I0126 23:35:34.377754 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="sg-core" containerID="cri-o://52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5" gracePeriod=30
Jan 26 23:35:34 crc kubenswrapper[4995]: I0126 23:35:34.377760 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-notification-agent" containerID="cri-o://b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2" gracePeriod=30
Jan 26 23:35:34 crc kubenswrapper[4995]: I0126 23:35:34.388245 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.214:3000/\": EOF"
Jan 26 23:35:35 crc kubenswrapper[4995]: I0126 23:35:35.309145 4995 generic.go:334] "Generic (PLEG): container finished" podID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerID="f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8" exitCode=0
Jan 26 23:35:35 crc kubenswrapper[4995]: I0126 23:35:35.309173 4995 generic.go:334] "Generic (PLEG): container finished" podID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerID="52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5" exitCode=2
Jan 26 23:35:35 crc kubenswrapper[4995]: I0126 23:35:35.309182 4995 generic.go:334] "Generic (PLEG): container finished" podID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerID="f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34" exitCode=0
Jan 26 23:35:35 crc kubenswrapper[4995]: I0126 23:35:35.309184 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerDied","Data":"f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8"}
Jan 26 23:35:35 crc kubenswrapper[4995]: I0126 23:35:35.309246 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerDied","Data":"52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5"}
Jan 26 23:35:35 crc kubenswrapper[4995]: I0126 23:35:35.309266 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerDied","Data":"f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34"}
Jan 26 23:35:38 crc kubenswrapper[4995]: I0126 23:35:38.954859 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098466 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sqrx\" (UniqueName: \"kubernetes.io/projected/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-kube-api-access-8sqrx\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") "
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098532 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-sg-core-conf-yaml\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") "
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098561 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-run-httpd\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") "
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098594 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-scripts\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") "
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098649 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-log-httpd\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") "
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098782 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-combined-ca-bundle\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") "
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098858 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-config-data\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") "
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098884 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-ceilometer-tls-certs\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") "
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.099205 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.099487 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.105029 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-scripts" (OuterVolumeSpecName: "scripts") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.105955 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-kube-api-access-8sqrx" (OuterVolumeSpecName: "kube-api-access-8sqrx") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "kube-api-access-8sqrx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.131613 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.149399 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.197289 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-config-data" (OuterVolumeSpecName: "config-data") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.202845 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sqrx\" (UniqueName: \"kubernetes.io/projected/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-kube-api-access-8sqrx\") on node \"crc\" DevicePath \"\""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.202890 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.202909 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.202927 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.202945 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.202961 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.202978 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.206641 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.304413 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.352891 4995 generic.go:334] "Generic (PLEG): container finished" podID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerID="b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2" exitCode=0
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.352928 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerDied","Data":"b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2"}
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.352953 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerDied","Data":"d28b1e3d26673a8db157b6635da5ecd48dd8d6acf8388d4c2dd8e2ae15407e7f"}
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.352975 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.352987 4995 scope.go:117] "RemoveContainer" containerID="f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.369564 4995 scope.go:117] "RemoveContainer" containerID="52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.386988 4995 scope.go:117] "RemoveContainer" containerID="b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.391652 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.405666 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.418335 4995 scope.go:117] "RemoveContainer" containerID="f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.418704 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.422007 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="sg-core"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422049 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="sg-core"
Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.422071 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-central-agent"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422085 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-central-agent"
Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.422143 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-notification-agent"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422156 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-notification-agent"
Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.422177 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="proxy-httpd"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422189 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="proxy-httpd"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422495 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="sg-core"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422509 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="proxy-httpd"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422518 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-central-agent"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422530 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-notification-agent"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.425008 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.427662 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.427996 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.428232 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.431912 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.454758 4995 scope.go:117] "RemoveContainer" containerID="f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8"
Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.455272 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8\": container with ID starting with f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8 not found: ID does not exist" containerID="f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.455311 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8"} err="failed to get container status \"f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8\": rpc error: code = NotFound desc = could not find container \"f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8\": container with ID starting with f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8 not found: ID does not exist"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.455345 4995 scope.go:117] "RemoveContainer" containerID="52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5"
Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.456819 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5\": container with ID starting with 52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5 not found: ID does not exist" containerID="52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.456856 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5"} err="failed to get container status \"52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5\": rpc error: code = NotFound desc = could not find container \"52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5\": container with ID starting with 52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5 not found: ID does not exist"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.456876 4995 scope.go:117] "RemoveContainer" containerID="b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2"
Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.457278 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2\": container with ID starting with b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2 not found: ID does not exist" containerID="b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.457299 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2"} err="failed to get container status \"b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2\": rpc error: code = NotFound desc = could not find container \"b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2\": container with ID starting with b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2 not found: ID does not exist"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.457312 4995 scope.go:117] "RemoveContainer" containerID="f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34"
Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.459786 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34\": container with ID starting with f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34 not found: ID does not exist" containerID="f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.459813 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34"} err="failed to get container status \"f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34\": rpc error: code = NotFound desc = could not find container \"f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34\": container with ID starting with f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34 not found: ID does not exist"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.507859 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-log-httpd\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.507921 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mwmt\" (UniqueName: \"kubernetes.io/projected/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-kube-api-access-4mwmt\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.507963 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.508065 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.508084 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-config-data\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.508119 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-run-httpd\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.508144 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-scripts\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.508265 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.609756 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.609821 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-config-data\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.609845 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-run-httpd\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.610187 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-scripts\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.610427 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-run-httpd\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.610439 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.610738 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-log-httpd\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.610804 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mwmt\" (UniqueName: \"kubernetes.io/projected/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-kube-api-access-4mwmt\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.610853 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.611258 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-log-httpd\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.615479 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.615602 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.616445 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-config-data\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.616845 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan
26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.631775 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-scripts\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.636217 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mwmt\" (UniqueName: \"kubernetes.io/projected/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-kube-api-access-4mwmt\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.755854 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:40 crc kubenswrapper[4995]: I0126 23:35:40.345688 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:40 crc kubenswrapper[4995]: I0126 23:35:40.371738 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerStarted","Data":"2fa9a5a009be0079265a0dcc50a1983900de0b2654a05dca9c424e408f133efc"} Jan 26 23:35:40 crc kubenswrapper[4995]: I0126 23:35:40.530356 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" path="/var/lib/kubelet/pods/f41652c5-25d5-4bb9-bbfc-c460448d0ec6/volumes" Jan 26 23:35:40 crc kubenswrapper[4995]: I0126 23:35:40.893393 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:35:40 crc 
kubenswrapper[4995]: I0126 23:35:40.893477 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:35:41 crc kubenswrapper[4995]: I0126 23:35:41.383230 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerStarted","Data":"f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201"} Jan 26 23:35:42 crc kubenswrapper[4995]: I0126 23:35:42.391567 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerStarted","Data":"af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66"} Jan 26 23:35:42 crc kubenswrapper[4995]: I0126 23:35:42.391829 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerStarted","Data":"b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b"} Jan 26 23:35:44 crc kubenswrapper[4995]: I0126 23:35:44.416076 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerStarted","Data":"b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86"} Jan 26 23:35:44 crc kubenswrapper[4995]: I0126 23:35:44.416574 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:44 crc kubenswrapper[4995]: I0126 23:35:44.438090 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.188338234 
podStartE2EDuration="5.438075067s" podCreationTimestamp="2026-01-26 23:35:39 +0000 UTC" firstStartedPulling="2026-01-26 23:35:40.362018182 +0000 UTC m=+1644.526725647" lastFinishedPulling="2026-01-26 23:35:43.611755005 +0000 UTC m=+1647.776462480" observedRunningTime="2026-01-26 23:35:44.432862037 +0000 UTC m=+1648.597569502" watchObservedRunningTime="2026-01-26 23:35:44.438075067 +0000 UTC m=+1648.602782532" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.165660 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp"] Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.167562 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.169624 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-scripts" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.171773 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.182519 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp"] Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.302121 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.302200 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts-volume\" (UniqueName: 
\"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-scripts-volume\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.302351 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jx29\" (UniqueName: \"kubernetes.io/projected/27579212-06da-4939-bada-9ecd375faf00-kube-api-access-9jx29\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.302491 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-config-data\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.404221 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-scripts-volume\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.404278 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jx29\" (UniqueName: \"kubernetes.io/projected/27579212-06da-4939-bada-9ecd375faf00-kube-api-access-9jx29\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc 
kubenswrapper[4995]: I0126 23:36:00.404328 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-config-data\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.404444 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.410507 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.410560 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-scripts-volume\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.410749 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-config-data\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " 
pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.422005 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jx29\" (UniqueName: \"kubernetes.io/projected/27579212-06da-4939-bada-9ecd375faf00-kube-api-access-9jx29\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.489181 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.992139 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp"] Jan 26 23:36:01 crc kubenswrapper[4995]: W0126 23:36:00.997967 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27579212_06da_4939_bada_9ecd375faf00.slice/crio-ed356e303cfdfa896f07d3e494ca701058e6cdc789616e57d799d8c441ccb6da WatchSource:0}: Error finding container ed356e303cfdfa896f07d3e494ca701058e6cdc789616e57d799d8c441ccb6da: Status 404 returned error can't find the container with id ed356e303cfdfa896f07d3e494ca701058e6cdc789616e57d799d8c441ccb6da Jan 26 23:36:01 crc kubenswrapper[4995]: I0126 23:36:01.600932 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" event={"ID":"27579212-06da-4939-bada-9ecd375faf00","Type":"ContainerStarted","Data":"75791934fa81195c3b5b4a00cd7de4aeb20bba8ee707df60b935a30d47992dd2"} Jan 26 23:36:01 crc kubenswrapper[4995]: I0126 23:36:01.601197 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" 
event={"ID":"27579212-06da-4939-bada-9ecd375faf00","Type":"ContainerStarted","Data":"ed356e303cfdfa896f07d3e494ca701058e6cdc789616e57d799d8c441ccb6da"} Jan 26 23:36:01 crc kubenswrapper[4995]: I0126 23:36:01.621417 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" podStartSLOduration=1.621391226 podStartE2EDuration="1.621391226s" podCreationTimestamp="2026-01-26 23:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:36:01.615739414 +0000 UTC m=+1665.780446879" watchObservedRunningTime="2026-01-26 23:36:01.621391226 +0000 UTC m=+1665.786098721" Jan 26 23:36:03 crc kubenswrapper[4995]: I0126 23:36:03.621820 4995 generic.go:334] "Generic (PLEG): container finished" podID="27579212-06da-4939-bada-9ecd375faf00" containerID="75791934fa81195c3b5b4a00cd7de4aeb20bba8ee707df60b935a30d47992dd2" exitCode=0 Jan 26 23:36:03 crc kubenswrapper[4995]: I0126 23:36:03.621862 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" event={"ID":"27579212-06da-4939-bada-9ecd375faf00","Type":"ContainerDied","Data":"75791934fa81195c3b5b4a00cd7de4aeb20bba8ee707df60b935a30d47992dd2"} Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.014346 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.191056 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jx29\" (UniqueName: \"kubernetes.io/projected/27579212-06da-4939-bada-9ecd375faf00-kube-api-access-9jx29\") pod \"27579212-06da-4939-bada-9ecd375faf00\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.191245 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-scripts-volume\") pod \"27579212-06da-4939-bada-9ecd375faf00\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.191356 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-combined-ca-bundle\") pod \"27579212-06da-4939-bada-9ecd375faf00\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.191536 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-config-data\") pod \"27579212-06da-4939-bada-9ecd375faf00\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.203829 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-scripts-volume" (OuterVolumeSpecName: "scripts-volume") pod "27579212-06da-4939-bada-9ecd375faf00" (UID: "27579212-06da-4939-bada-9ecd375faf00"). InnerVolumeSpecName "scripts-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.204678 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27579212-06da-4939-bada-9ecd375faf00-kube-api-access-9jx29" (OuterVolumeSpecName: "kube-api-access-9jx29") pod "27579212-06da-4939-bada-9ecd375faf00" (UID: "27579212-06da-4939-bada-9ecd375faf00"). InnerVolumeSpecName "kube-api-access-9jx29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.217751 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27579212-06da-4939-bada-9ecd375faf00" (UID: "27579212-06da-4939-bada-9ecd375faf00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.235911 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-config-data" (OuterVolumeSpecName: "config-data") pod "27579212-06da-4939-bada-9ecd375faf00" (UID: "27579212-06da-4939-bada-9ecd375faf00"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.294622 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.294692 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jx29\" (UniqueName: \"kubernetes.io/projected/27579212-06da-4939-bada-9ecd375faf00-kube-api-access-9jx29\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.294719 4995 reconciler_common.go:293] "Volume detached for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-scripts-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.294744 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.645893 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" event={"ID":"27579212-06da-4939-bada-9ecd375faf00","Type":"ContainerDied","Data":"ed356e303cfdfa896f07d3e494ca701058e6cdc789616e57d799d8c441ccb6da"} Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.645993 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed356e303cfdfa896f07d3e494ca701058e6cdc789616e57d799d8c441ccb6da" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.646063 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.536835 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.542986 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.555565 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.563610 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.584477 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-fw4t2"] Jan 26 23:36:07 crc kubenswrapper[4995]: E0126 23:36:07.584796 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27579212-06da-4939-bada-9ecd375faf00" containerName="watcher-db-manage" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.584814 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="27579212-06da-4939-bada-9ecd375faf00" containerName="watcher-db-manage" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.584982 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="27579212-06da-4939-bada-9ecd375faf00" containerName="watcher-db-manage" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.585775 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.603492 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-fw4t2"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.686304 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.686523 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-kuttl-api-log" containerID="cri-o://594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae" gracePeriod=30 Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.686635 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-api" containerID="cri-o://9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017" gracePeriod=30 Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.719466 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.719703 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-kuttl-api-log" containerID="cri-o://9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3" gracePeriod=30 Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.719926 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-api" 
containerID="cri-o://6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229" gracePeriod=30 Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.743553 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t7lg\" (UniqueName: \"kubernetes.io/projected/b11fff4a-980d-40c6-a480-3f188cda47bc-kube-api-access-7t7lg\") pod \"watchertest-account-delete-fw4t2\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") " pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.743664 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b11fff4a-980d-40c6-a480-3f188cda47bc-operator-scripts\") pod \"watchertest-account-delete-fw4t2\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") " pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.775373 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.776004 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="5820f715-2962-4319-b398-fa2a9975c5ea" containerName="watcher-applier" containerID="cri-o://40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f" gracePeriod=30 Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.793983 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.804816 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="45aef819-2cda-443f-82ef-6e54a5be4261" containerName="watcher-decision-engine" 
containerID="cri-o://8e0a18fefddbd7ab88304acf06d4c9193d40d1dcec642f6c4911e0a3644ff057" gracePeriod=30 Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.847947 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t7lg\" (UniqueName: \"kubernetes.io/projected/b11fff4a-980d-40c6-a480-3f188cda47bc-kube-api-access-7t7lg\") pod \"watchertest-account-delete-fw4t2\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") " pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.848033 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b11fff4a-980d-40c6-a480-3f188cda47bc-operator-scripts\") pod \"watchertest-account-delete-fw4t2\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") " pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.848765 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b11fff4a-980d-40c6-a480-3f188cda47bc-operator-scripts\") pod \"watchertest-account-delete-fw4t2\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") " pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.877771 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t7lg\" (UniqueName: \"kubernetes.io/projected/b11fff4a-980d-40c6-a480-3f188cda47bc-kube-api-access-7t7lg\") pod \"watchertest-account-delete-fw4t2\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") " pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.900470 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.433477 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-fw4t2"] Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.526788 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27579212-06da-4939-bada-9ecd375faf00" path="/var/lib/kubelet/pods/27579212-06da-4939-bada-9ecd375faf00/volumes" Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.527646 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e560ee-8e9f-41b9-a407-6879c581e5b5" path="/var/lib/kubelet/pods/f3e560ee-8e9f-41b9-a407-6879c581e5b5/volumes" Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.678819 4995 generic.go:334] "Generic (PLEG): container finished" podID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerID="9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3" exitCode=143 Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.679053 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7ec430d5-4541-494e-88bc-d6cb00ceb6fc","Type":"ContainerDied","Data":"9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3"} Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.681320 4995 generic.go:334] "Generic (PLEG): container finished" podID="708f8ff2-4449-41ed-9436-28f9aae04852" containerID="594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae" exitCode=143 Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.681352 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"708f8ff2-4449-41ed-9436-28f9aae04852","Type":"ContainerDied","Data":"594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae"} Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.682623 4995 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" event={"ID":"b11fff4a-980d-40c6-a480-3f188cda47bc","Type":"ContainerStarted","Data":"1582e84b9afefe2ee6063a8f17ab45c4317bc68064db6d3d6e513c3859811183"} Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.682649 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" event={"ID":"b11fff4a-980d-40c6-a480-3f188cda47bc","Type":"ContainerStarted","Data":"cc127f351dfb19f4952ec734c11c273f7c077bf4bf86c7838ab1e91d22fa9c24"} Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.700410 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" podStartSLOduration=1.700389576 podStartE2EDuration="1.700389576s" podCreationTimestamp="2026-01-26 23:36:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:36:08.693065473 +0000 UTC m=+1672.857772958" watchObservedRunningTime="2026-01-26 23:36:08.700389576 +0000 UTC m=+1672.865097051" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.069328 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.199364 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-logs\") pod \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.199512 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-custom-prometheus-ca\") pod \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.199571 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-combined-ca-bundle\") pod \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.199642 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-cert-memcached-mtls\") pod \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.199684 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8mtm\" (UniqueName: \"kubernetes.io/projected/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-kube-api-access-d8mtm\") pod \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.199728 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-config-data\") pod \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.207264 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-logs" (OuterVolumeSpecName: "logs") pod "7ec430d5-4541-494e-88bc-d6cb00ceb6fc" (UID: "7ec430d5-4541-494e-88bc-d6cb00ceb6fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.224999 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-kube-api-access-d8mtm" (OuterVolumeSpecName: "kube-api-access-d8mtm") pod "7ec430d5-4541-494e-88bc-d6cb00ceb6fc" (UID: "7ec430d5-4541-494e-88bc-d6cb00ceb6fc"). InnerVolumeSpecName "kube-api-access-d8mtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.244380 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7ec430d5-4541-494e-88bc-d6cb00ceb6fc" (UID: "7ec430d5-4541-494e-88bc-d6cb00ceb6fc"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.260143 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ec430d5-4541-494e-88bc-d6cb00ceb6fc" (UID: "7ec430d5-4541-494e-88bc-d6cb00ceb6fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.283237 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-config-data" (OuterVolumeSpecName: "config-data") pod "7ec430d5-4541-494e-88bc-d6cb00ceb6fc" (UID: "7ec430d5-4541-494e-88bc-d6cb00ceb6fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.306300 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.306330 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.306369 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.306381 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8mtm\" (UniqueName: \"kubernetes.io/projected/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-kube-api-access-d8mtm\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.306396 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.322448 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "7ec430d5-4541-494e-88bc-d6cb00ceb6fc" (UID: "7ec430d5-4541-494e-88bc-d6cb00ceb6fc"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.353114 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.408369 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.509425 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-combined-ca-bundle\") pod \"708f8ff2-4449-41ed-9436-28f9aae04852\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.509524 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-config-data\") pod \"708f8ff2-4449-41ed-9436-28f9aae04852\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.509667 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/708f8ff2-4449-41ed-9436-28f9aae04852-logs\") pod \"708f8ff2-4449-41ed-9436-28f9aae04852\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.509692 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" 
(UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-custom-prometheus-ca\") pod \"708f8ff2-4449-41ed-9436-28f9aae04852\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.509734 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-cert-memcached-mtls\") pod \"708f8ff2-4449-41ed-9436-28f9aae04852\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.509761 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q948z\" (UniqueName: \"kubernetes.io/projected/708f8ff2-4449-41ed-9436-28f9aae04852-kube-api-access-q948z\") pod \"708f8ff2-4449-41ed-9436-28f9aae04852\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.510127 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/708f8ff2-4449-41ed-9436-28f9aae04852-logs" (OuterVolumeSpecName: "logs") pod "708f8ff2-4449-41ed-9436-28f9aae04852" (UID: "708f8ff2-4449-41ed-9436-28f9aae04852"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.510646 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/708f8ff2-4449-41ed-9436-28f9aae04852-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.524378 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708f8ff2-4449-41ed-9436-28f9aae04852-kube-api-access-q948z" (OuterVolumeSpecName: "kube-api-access-q948z") pod "708f8ff2-4449-41ed-9436-28f9aae04852" (UID: "708f8ff2-4449-41ed-9436-28f9aae04852"). InnerVolumeSpecName "kube-api-access-q948z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.543755 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "708f8ff2-4449-41ed-9436-28f9aae04852" (UID: "708f8ff2-4449-41ed-9436-28f9aae04852"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.559872 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "708f8ff2-4449-41ed-9436-28f9aae04852" (UID: "708f8ff2-4449-41ed-9436-28f9aae04852"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.570904 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-config-data" (OuterVolumeSpecName: "config-data") pod "708f8ff2-4449-41ed-9436-28f9aae04852" (UID: "708f8ff2-4449-41ed-9436-28f9aae04852"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.600180 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "708f8ff2-4449-41ed-9436-28f9aae04852" (UID: "708f8ff2-4449-41ed-9436-28f9aae04852"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.612234 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.612276 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.612286 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q948z\" (UniqueName: \"kubernetes.io/projected/708f8ff2-4449-41ed-9436-28f9aae04852-kube-api-access-q948z\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.612296 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.612307 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.693318 4995 generic.go:334] "Generic (PLEG): container finished" podID="b11fff4a-980d-40c6-a480-3f188cda47bc" containerID="1582e84b9afefe2ee6063a8f17ab45c4317bc68064db6d3d6e513c3859811183" exitCode=0 Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.693420 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" 
event={"ID":"b11fff4a-980d-40c6-a480-3f188cda47bc","Type":"ContainerDied","Data":"1582e84b9afefe2ee6063a8f17ab45c4317bc68064db6d3d6e513c3859811183"} Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.695330 4995 generic.go:334] "Generic (PLEG): container finished" podID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerID="6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229" exitCode=0 Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.695384 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7ec430d5-4541-494e-88bc-d6cb00ceb6fc","Type":"ContainerDied","Data":"6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229"} Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.695409 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7ec430d5-4541-494e-88bc-d6cb00ceb6fc","Type":"ContainerDied","Data":"3112ece43fb6b0f1c6030da3f87999145998e3975488ad4447ac4aa3ae034350"} Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.695431 4995 scope.go:117] "RemoveContainer" containerID="6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.695601 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.702495 4995 generic.go:334] "Generic (PLEG): container finished" podID="708f8ff2-4449-41ed-9436-28f9aae04852" containerID="9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017" exitCode=0 Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.702547 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"708f8ff2-4449-41ed-9436-28f9aae04852","Type":"ContainerDied","Data":"9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017"} Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.702568 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"708f8ff2-4449-41ed-9436-28f9aae04852","Type":"ContainerDied","Data":"3552157133b281fe15163acc8920527c58a71f9a3504fc6611d9bb3087f9461f"} Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.702651 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.731238 4995 scope.go:117] "RemoveContainer" containerID="9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.771011 4995 scope.go:117] "RemoveContainer" containerID="6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229" Jan 26 23:36:09 crc kubenswrapper[4995]: E0126 23:36:09.771523 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229\": container with ID starting with 6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229 not found: ID does not exist" containerID="6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.771579 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229"} err="failed to get container status \"6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229\": rpc error: code = NotFound desc = could not find container \"6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229\": container with ID starting with 6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229 not found: ID does not exist" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.771609 4995 scope.go:117] "RemoveContainer" containerID="9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3" Jan 26 23:36:09 crc kubenswrapper[4995]: E0126 23:36:09.772358 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3\": container with ID starting with 
9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3 not found: ID does not exist" containerID="9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.772418 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3"} err="failed to get container status \"9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3\": rpc error: code = NotFound desc = could not find container \"9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3\": container with ID starting with 9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3 not found: ID does not exist" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.772442 4995 scope.go:117] "RemoveContainer" containerID="9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.773672 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.790805 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.803854 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.811041 4995 scope.go:117] "RemoveContainer" containerID="594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.823979 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.831657 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:36:09 crc 
kubenswrapper[4995]: I0126 23:36:09.878737 4995 scope.go:117] "RemoveContainer" containerID="9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017" Jan 26 23:36:09 crc kubenswrapper[4995]: E0126 23:36:09.881810 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017\": container with ID starting with 9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017 not found: ID does not exist" containerID="9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.881855 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017"} err="failed to get container status \"9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017\": rpc error: code = NotFound desc = could not find container \"9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017\": container with ID starting with 9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017 not found: ID does not exist" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.881882 4995 scope.go:117] "RemoveContainer" containerID="594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae" Jan 26 23:36:09 crc kubenswrapper[4995]: E0126 23:36:09.882192 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae\": container with ID starting with 594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae not found: ID does not exist" containerID="594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.882212 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae"} err="failed to get container status \"594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae\": rpc error: code = NotFound desc = could not find container \"594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae\": container with ID starting with 594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae not found: ID does not exist" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.048197 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.526298 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" path="/var/lib/kubelet/pods/708f8ff2-4449-41ed-9436-28f9aae04852/volumes" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.527086 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" path="/var/lib/kubelet/pods/7ec430d5-4541-494e-88bc-d6cb00ceb6fc/volumes" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.629051 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.710501 4995 generic.go:334] "Generic (PLEG): container finished" podID="5820f715-2962-4319-b398-fa2a9975c5ea" containerID="40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f" exitCode=0 Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.710539 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5820f715-2962-4319-b398-fa2a9975c5ea","Type":"ContainerDied","Data":"40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f"} Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.710567 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.710599 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5820f715-2962-4319-b398-fa2a9975c5ea","Type":"ContainerDied","Data":"0dbe8986d915bf4ee76e7de6a0a0f090025a5095fbf6910a09a8181b9e9f8012"} Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.710623 4995 scope.go:117] "RemoveContainer" containerID="40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.713392 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-central-agent" containerID="cri-o://f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201" gracePeriod=30 Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.713406 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="proxy-httpd" 
containerID="cri-o://b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86" gracePeriod=30 Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.713463 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-notification-agent" containerID="cri-o://b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b" gracePeriod=30 Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.713456 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="sg-core" containerID="cri-o://af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66" gracePeriod=30 Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.739628 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-config-data\") pod \"5820f715-2962-4319-b398-fa2a9975c5ea\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.739888 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5820f715-2962-4319-b398-fa2a9975c5ea-logs\") pod \"5820f715-2962-4319-b398-fa2a9975c5ea\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.739949 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjxst\" (UniqueName: \"kubernetes.io/projected/5820f715-2962-4319-b398-fa2a9975c5ea-kube-api-access-vjxst\") pod \"5820f715-2962-4319-b398-fa2a9975c5ea\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.739977 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-cert-memcached-mtls\") pod \"5820f715-2962-4319-b398-fa2a9975c5ea\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.740048 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-combined-ca-bundle\") pod \"5820f715-2962-4319-b398-fa2a9975c5ea\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.740373 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5820f715-2962-4319-b398-fa2a9975c5ea-logs" (OuterVolumeSpecName: "logs") pod "5820f715-2962-4319-b398-fa2a9975c5ea" (UID: "5820f715-2962-4319-b398-fa2a9975c5ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.744193 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5820f715-2962-4319-b398-fa2a9975c5ea-kube-api-access-vjxst" (OuterVolumeSpecName: "kube-api-access-vjxst") pod "5820f715-2962-4319-b398-fa2a9975c5ea" (UID: "5820f715-2962-4319-b398-fa2a9975c5ea"). InnerVolumeSpecName "kube-api-access-vjxst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.772257 4995 scope.go:117] "RemoveContainer" containerID="40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f" Jan 26 23:36:10 crc kubenswrapper[4995]: E0126 23:36:10.772700 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f\": container with ID starting with 40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f not found: ID does not exist" containerID="40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.772724 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f"} err="failed to get container status \"40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f\": rpc error: code = NotFound desc = could not find container \"40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f\": container with ID starting with 40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f not found: ID does not exist" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.843676 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjxst\" (UniqueName: \"kubernetes.io/projected/5820f715-2962-4319-b398-fa2a9975c5ea-kube-api-access-vjxst\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.843707 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5820f715-2962-4319-b398-fa2a9975c5ea-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.854228 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5820f715-2962-4319-b398-fa2a9975c5ea" (UID: "5820f715-2962-4319-b398-fa2a9975c5ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.878358 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-config-data" (OuterVolumeSpecName: "config-data") pod "5820f715-2962-4319-b398-fa2a9975c5ea" (UID: "5820f715-2962-4319-b398-fa2a9975c5ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.897475 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.897527 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.897564 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.898150 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"} 
pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.898194 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" containerID="cri-o://dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" gracePeriod=600
Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.930316 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "5820f715-2962-4319-b398-fa2a9975c5ea" (UID: "5820f715-2962-4319-b398-fa2a9975c5ea"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.947120 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.947148 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.947165 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:11 crc kubenswrapper[4995]: E0126 23:36:11.054349 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.060201 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.062971 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.175477 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2"
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.355441 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b11fff4a-980d-40c6-a480-3f188cda47bc-operator-scripts\") pod \"b11fff4a-980d-40c6-a480-3f188cda47bc\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") "
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.355593 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t7lg\" (UniqueName: \"kubernetes.io/projected/b11fff4a-980d-40c6-a480-3f188cda47bc-kube-api-access-7t7lg\") pod \"b11fff4a-980d-40c6-a480-3f188cda47bc\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") "
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.356044 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b11fff4a-980d-40c6-a480-3f188cda47bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b11fff4a-980d-40c6-a480-3f188cda47bc" (UID: "b11fff4a-980d-40c6-a480-3f188cda47bc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.358632 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11fff4a-980d-40c6-a480-3f188cda47bc-kube-api-access-7t7lg" (OuterVolumeSpecName: "kube-api-access-7t7lg") pod "b11fff4a-980d-40c6-a480-3f188cda47bc" (UID: "b11fff4a-980d-40c6-a480-3f188cda47bc"). InnerVolumeSpecName "kube-api-access-7t7lg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.457571 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t7lg\" (UniqueName: \"kubernetes.io/projected/b11fff4a-980d-40c6-a480-3f188cda47bc-kube-api-access-7t7lg\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.457608 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b11fff4a-980d-40c6-a480-3f188cda47bc-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.728122 4995 generic.go:334] "Generic (PLEG): container finished" podID="45aef819-2cda-443f-82ef-6e54a5be4261" containerID="8e0a18fefddbd7ab88304acf06d4c9193d40d1dcec642f6c4911e0a3644ff057" exitCode=0
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.728184 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"45aef819-2cda-443f-82ef-6e54a5be4261","Type":"ContainerDied","Data":"8e0a18fefddbd7ab88304acf06d4c9193d40d1dcec642f6c4911e0a3644ff057"}
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.742437 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" exitCode=0
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.742542 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"}
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.742604 4995 scope.go:117] "RemoveContainer" containerID="76f8ec744701d2466129fe4bf8df26122f8725276e4896b88abef624b66b4570"
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.743381 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"
Jan 26 23:36:11 crc kubenswrapper[4995]: E0126 23:36:11.743820 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.751202 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" event={"ID":"b11fff4a-980d-40c6-a480-3f188cda47bc","Type":"ContainerDied","Data":"cc127f351dfb19f4952ec734c11c273f7c077bf4bf86c7838ab1e91d22fa9c24"}
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.751251 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc127f351dfb19f4952ec734c11c273f7c077bf4bf86c7838ab1e91d22fa9c24"
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.751333 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2"
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.767945 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerDied","Data":"b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86"}
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.767951 4995 generic.go:334] "Generic (PLEG): container finished" podID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerID="b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86" exitCode=0
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.768024 4995 generic.go:334] "Generic (PLEG): container finished" podID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerID="af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66" exitCode=2
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.768037 4995 generic.go:334] "Generic (PLEG): container finished" podID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerID="f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201" exitCode=0
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.768054 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerDied","Data":"af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66"}
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.768065 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerDied","Data":"f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201"}
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.849135 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.970207 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-cert-memcached-mtls\") pod \"45aef819-2cda-443f-82ef-6e54a5be4261\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") "
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.970296 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmvdc\" (UniqueName: \"kubernetes.io/projected/45aef819-2cda-443f-82ef-6e54a5be4261-kube-api-access-lmvdc\") pod \"45aef819-2cda-443f-82ef-6e54a5be4261\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") "
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.970348 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45aef819-2cda-443f-82ef-6e54a5be4261-logs\") pod \"45aef819-2cda-443f-82ef-6e54a5be4261\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") "
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.970383 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-custom-prometheus-ca\") pod \"45aef819-2cda-443f-82ef-6e54a5be4261\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") "
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.970415 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-combined-ca-bundle\") pod \"45aef819-2cda-443f-82ef-6e54a5be4261\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") "
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.970452 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-config-data\") pod \"45aef819-2cda-443f-82ef-6e54a5be4261\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") "
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.971324 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45aef819-2cda-443f-82ef-6e54a5be4261-logs" (OuterVolumeSpecName: "logs") pod "45aef819-2cda-443f-82ef-6e54a5be4261" (UID: "45aef819-2cda-443f-82ef-6e54a5be4261"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.991283 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45aef819-2cda-443f-82ef-6e54a5be4261-kube-api-access-lmvdc" (OuterVolumeSpecName: "kube-api-access-lmvdc") pod "45aef819-2cda-443f-82ef-6e54a5be4261" (UID: "45aef819-2cda-443f-82ef-6e54a5be4261"). InnerVolumeSpecName "kube-api-access-lmvdc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.998389 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "45aef819-2cda-443f-82ef-6e54a5be4261" (UID: "45aef819-2cda-443f-82ef-6e54a5be4261"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.005414 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45aef819-2cda-443f-82ef-6e54a5be4261" (UID: "45aef819-2cda-443f-82ef-6e54a5be4261"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.029629 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-config-data" (OuterVolumeSpecName: "config-data") pod "45aef819-2cda-443f-82ef-6e54a5be4261" (UID: "45aef819-2cda-443f-82ef-6e54a5be4261"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.072500 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmvdc\" (UniqueName: \"kubernetes.io/projected/45aef819-2cda-443f-82ef-6e54a5be4261-kube-api-access-lmvdc\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.072530 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45aef819-2cda-443f-82ef-6e54a5be4261-logs\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.072541 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.072550 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.072558 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.075508 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "45aef819-2cda-443f-82ef-6e54a5be4261" (UID: "45aef819-2cda-443f-82ef-6e54a5be4261"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.173596 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.528983 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5820f715-2962-4319-b398-fa2a9975c5ea" path="/var/lib/kubelet/pods/5820f715-2962-4319-b398-fa2a9975c5ea/volumes"
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.618334 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.618666 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-b6hk2"]
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.625327 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-b6hk2"]
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.664145 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-sq8zx"]
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.675141 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-fw4t2"]
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.690137 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-sq8zx"]
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.698147 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-fw4t2"]
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.781053 4995 generic.go:334] "Generic (PLEG): container finished" podID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerID="b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b" exitCode=0
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.781243 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerDied","Data":"b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b"}
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.782811 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerDied","Data":"2fa9a5a009be0079265a0dcc50a1983900de0b2654a05dca9c424e408f133efc"}
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.782346 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-sg-core-conf-yaml\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") "
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.782999 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mwmt\" (UniqueName: \"kubernetes.io/projected/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-kube-api-access-4mwmt\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") "
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.783167 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-config-data\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") "
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.783297 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-ceilometer-tls-certs\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") "
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.783357 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-combined-ca-bundle\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") "
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.782889 4995 scope.go:117] "RemoveContainer" containerID="b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86"
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.781349 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.783552 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-run-httpd\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") "
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.783667 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-scripts\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") "
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.783733 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-log-httpd\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") "
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.784230 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.784564 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.789792 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"45aef819-2cda-443f-82ef-6e54a5be4261","Type":"ContainerDied","Data":"49b775066686cf1211b0793974ac3eb4ebf22547fdd06b700494d55206364cad"}
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.789821 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-kube-api-access-4mwmt" (OuterVolumeSpecName: "kube-api-access-4mwmt") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "kube-api-access-4mwmt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.789884 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.791325 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-scripts" (OuterVolumeSpecName: "scripts") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.812226 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.840831 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.885478 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.885517 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.885529 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.885539 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.885548 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.885558 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mwmt\" (UniqueName: \"kubernetes.io/projected/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-kube-api-access-4mwmt\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.896253 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.904925 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-config-data" (OuterVolumeSpecName: "config-data") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.963918 4995 scope.go:117] "RemoveContainer" containerID="af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66"
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.991171 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.991217 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.993521 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:12.995859 4995 scope.go:117] "RemoveContainer" containerID="b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.000046 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.022674 4995 scope.go:117] "RemoveContainer" containerID="f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.048254 4995 scope.go:117] "RemoveContainer" containerID="b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86"
Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.048857 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86\": container with ID starting with b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86 not found: ID does not exist" containerID="b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.048905 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86"} err="failed to get container status \"b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86\": rpc error: code = NotFound desc = could not find container \"b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86\": container with ID starting with b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86 not found: ID does not exist"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.048928 4995 scope.go:117] "RemoveContainer" containerID="af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66"
Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.049374 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66\": container with ID starting with af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66 not found: ID does not exist" containerID="af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.049409 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66"} err="failed to get container status \"af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66\": rpc error: code = NotFound desc = could not find container \"af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66\": container with ID starting with af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66 not found: ID does not exist"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.049429 4995 scope.go:117] "RemoveContainer" containerID="b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b"
Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.049873 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b\": container with ID starting with b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b not found: ID does not exist" containerID="b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.049913 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b"} err="failed to get container status \"b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b\": rpc error: code = NotFound desc = could not find container \"b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b\": container with ID starting with b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b not found: ID does not exist"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.049932 4995 scope.go:117] "RemoveContainer" containerID="f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201"
Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.050241 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201\": container with ID starting with f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201 not found: ID does not exist" containerID="f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.050269 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201"} err="failed to get container status \"f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201\": rpc error: code = NotFound desc = could not find container \"f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201\": container with ID starting with f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201 not found: ID does not exist"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.050288 4995 scope.go:117] "RemoveContainer" containerID="8e0a18fefddbd7ab88304acf06d4c9193d40d1dcec642f6c4911e0a3644ff057"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.130852 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.140142 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.155861 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156163 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5820f715-2962-4319-b398-fa2a9975c5ea" containerName="watcher-applier"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156178 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5820f715-2962-4319-b398-fa2a9975c5ea" containerName="watcher-applier"
Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156197 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11fff4a-980d-40c6-a480-3f188cda47bc" containerName="mariadb-account-delete"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156204 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11fff4a-980d-40c6-a480-3f188cda47bc" containerName="mariadb-account-delete"
Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156214 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45aef819-2cda-443f-82ef-6e54a5be4261" containerName="watcher-decision-engine"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156220 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="45aef819-2cda-443f-82ef-6e54a5be4261" containerName="watcher-decision-engine"
Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156228 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-central-agent"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156234 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-central-agent"
Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156241 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-kuttl-api-log"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156246 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-kuttl-api-log"
Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156258 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-notification-agent"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156263 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-notification-agent"
Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156272 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-api"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156277 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-api"
Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156291 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="sg-core"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156297 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="sg-core"
Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156306 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-kuttl-api-log"
Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156312 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-kuttl-api-log"
Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156319 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-api"
Jan 26 23:36:13 crc kubenswrapper[4995]:
I0126 23:36:13.156324 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-api" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156336 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="proxy-httpd" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156341 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="proxy-httpd" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156467 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-api" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156480 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5820f715-2962-4319-b398-fa2a9975c5ea" containerName="watcher-applier" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156486 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-kuttl-api-log" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156508 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-central-agent" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156517 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="45aef819-2cda-443f-82ef-6e54a5be4261" containerName="watcher-decision-engine" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156526 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-kuttl-api-log" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156535 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-notification-agent" Jan 
26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156545 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-api" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156552 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="b11fff4a-980d-40c6-a480-3f188cda47bc" containerName="mariadb-account-delete" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156559 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="proxy-httpd" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156569 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="sg-core" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.157852 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.159886 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.159950 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.163721 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.177292 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.302950 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-config-data\") pod \"ceilometer-0\" (UID: 
\"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.302995 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.303022 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584b0f31-d1a1-4e26-b025-0927cfa15d55-log-httpd\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.303080 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.303131 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkfnz\" (UniqueName: \"kubernetes.io/projected/584b0f31-d1a1-4e26-b025-0927cfa15d55-kube-api-access-hkfnz\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.303177 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584b0f31-d1a1-4e26-b025-0927cfa15d55-run-httpd\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.303192 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.303230 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-scripts\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405093 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584b0f31-d1a1-4e26-b025-0927cfa15d55-run-httpd\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405152 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405208 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-scripts\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405250 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-config-data\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405274 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405298 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584b0f31-d1a1-4e26-b025-0927cfa15d55-log-httpd\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405325 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405351 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkfnz\" (UniqueName: \"kubernetes.io/projected/584b0f31-d1a1-4e26-b025-0927cfa15d55-kube-api-access-hkfnz\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405991 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584b0f31-d1a1-4e26-b025-0927cfa15d55-log-httpd\") 
pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.406144 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584b0f31-d1a1-4e26-b025-0927cfa15d55-run-httpd\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.410838 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.410997 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.412333 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-scripts\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.412753 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-config-data\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.414429 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.429564 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkfnz\" (UniqueName: \"kubernetes.io/projected/584b0f31-d1a1-4e26-b025-0927cfa15d55-kube-api-access-hkfnz\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.471612 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.973116 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:36:14 crc kubenswrapper[4995]: I0126 23:36:14.533550 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e413561-4428-409c-9ca8-2eb61cbe1489" path="/var/lib/kubelet/pods/0e413561-4428-409c-9ca8-2eb61cbe1489/volumes" Jan 26 23:36:14 crc kubenswrapper[4995]: I0126 23:36:14.535530 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" path="/var/lib/kubelet/pods/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5/volumes" Jan 26 23:36:14 crc kubenswrapper[4995]: I0126 23:36:14.537039 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45aef819-2cda-443f-82ef-6e54a5be4261" path="/var/lib/kubelet/pods/45aef819-2cda-443f-82ef-6e54a5be4261/volumes" Jan 26 23:36:14 crc kubenswrapper[4995]: I0126 23:36:14.541525 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949c118d-bfd2-4707-9091-abc3434a4fb6" path="/var/lib/kubelet/pods/949c118d-bfd2-4707-9091-abc3434a4fb6/volumes" Jan 26 
23:36:14 crc kubenswrapper[4995]: I0126 23:36:14.542602 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11fff4a-980d-40c6-a480-3f188cda47bc" path="/var/lib/kubelet/pods/b11fff4a-980d-40c6-a480-3f188cda47bc/volumes" Jan 26 23:36:14 crc kubenswrapper[4995]: I0126 23:36:14.834205 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"584b0f31-d1a1-4e26-b025-0927cfa15d55","Type":"ContainerStarted","Data":"db19b27dee46892f3861870ad75ed19054b80c81f99e147ca9e8a976d05b3b07"} Jan 26 23:36:14 crc kubenswrapper[4995]: I0126 23:36:14.834589 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"584b0f31-d1a1-4e26-b025-0927cfa15d55","Type":"ContainerStarted","Data":"223a1fed81a419ea9a382b7d6404d088da9819a6b7c0d5ee5c03b8bfebd899a0"} Jan 26 23:36:15 crc kubenswrapper[4995]: I0126 23:36:15.851542 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"584b0f31-d1a1-4e26-b025-0927cfa15d55","Type":"ContainerStarted","Data":"82ea9f683e39325f563a54b3fae044dcd9b6b60bf5a831fceeecf8cd4bb30bd3"} Jan 26 23:36:16 crc kubenswrapper[4995]: I0126 23:36:16.860668 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"584b0f31-d1a1-4e26-b025-0927cfa15d55","Type":"ContainerStarted","Data":"638a48b496b8d08324685d8e97f1eee226de3c651e90264bddf06600e16a3ea9"} Jan 26 23:36:17 crc kubenswrapper[4995]: I0126 23:36:17.875584 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"584b0f31-d1a1-4e26-b025-0927cfa15d55","Type":"ContainerStarted","Data":"5825b80d35d17bc2af4d98024e768c30c2494b0f5746cebe5e66dc82d99f5951"} Jan 26 23:36:17 crc kubenswrapper[4995]: I0126 23:36:17.877405 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:17 crc 
kubenswrapper[4995]: I0126 23:36:17.920454 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.882788559 podStartE2EDuration="4.920428826s" podCreationTimestamp="2026-01-26 23:36:13 +0000 UTC" firstStartedPulling="2026-01-26 23:36:13.982376057 +0000 UTC m=+1678.147083522" lastFinishedPulling="2026-01-26 23:36:17.020016324 +0000 UTC m=+1681.184723789" observedRunningTime="2026-01-26 23:36:17.902355912 +0000 UTC m=+1682.067063417" watchObservedRunningTime="2026-01-26 23:36:17.920428826 +0000 UTC m=+1682.085136321" Jan 26 23:36:18 crc kubenswrapper[4995]: I0126 23:36:18.985678 4995 scope.go:117] "RemoveContainer" containerID="628857604cce928f818ebc089bc87e2ce8ba9c786cadc542c50f09fdce7e0220" Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.024182 4995 scope.go:117] "RemoveContainer" containerID="fe72b36fbe062455d8a290e6c1bd9e0b00b8cb2f1b8b0be2c5f79be8315462a9" Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.077378 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/root-account-create-update-tkjsp"] Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.088857 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/root-account-create-update-tkjsp"] Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.109353 4995 scope.go:117] "RemoveContainer" containerID="8fd006c327ce56252705ed20528a00dcfa084ed04bd5e467803791a1f4ae0733" Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.150723 4995 scope.go:117] "RemoveContainer" containerID="5881a006fd0e8b545fdd02ea477aabaa591905ac84b4483905c5ea65a3a15279" Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.206783 4995 scope.go:117] "RemoveContainer" containerID="9bcf59f8068a58a5908f7f9f490fcde236bda08e654b64f1d471d1bef1b45cfc" Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.245166 4995 scope.go:117] "RemoveContainer" 
containerID="b04176a0e27de47ec9992ca7aa97e0c6c4c8aae35383f6b313a755fda54d8e47" Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.286877 4995 scope.go:117] "RemoveContainer" containerID="54026a5c7938c99685025eb0d6f422b9c6952be4668651d7bb950ada4b54c826" Jan 26 23:36:20 crc kubenswrapper[4995]: I0126 23:36:20.036745 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-create-4fsqw"] Jan 26 23:36:20 crc kubenswrapper[4995]: I0126 23:36:20.049854 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb"] Jan 26 23:36:20 crc kubenswrapper[4995]: I0126 23:36:20.057940 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb"] Jan 26 23:36:20 crc kubenswrapper[4995]: I0126 23:36:20.065093 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-create-4fsqw"] Jan 26 23:36:20 crc kubenswrapper[4995]: I0126 23:36:20.536695 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c339608-1d36-448f-b3cd-00252341cf0d" path="/var/lib/kubelet/pods/2c339608-1d36-448f-b3cd-00252341cf0d/volumes" Jan 26 23:36:20 crc kubenswrapper[4995]: I0126 23:36:20.537951 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="513f0b17-1707-4c0c-bc81-d7ead6a553c8" path="/var/lib/kubelet/pods/513f0b17-1707-4c0c-bc81-d7ead6a553c8/volumes" Jan 26 23:36:20 crc kubenswrapper[4995]: I0126 23:36:20.539256 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94023397-a2e2-42cb-8469-003bc383aeaa" path="/var/lib/kubelet/pods/94023397-a2e2-42cb-8469-003bc383aeaa/volumes" Jan 26 23:36:23 crc kubenswrapper[4995]: I0126 23:36:23.516883 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:36:23 crc kubenswrapper[4995]: E0126 23:36:23.517569 4995 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:36:38 crc kubenswrapper[4995]: I0126 23:36:38.516944 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:36:38 crc kubenswrapper[4995]: E0126 23:36:38.517652 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:36:43 crc kubenswrapper[4995]: I0126 23:36:43.482328 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.661222 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kpk7x/must-gather-7f9z4"] Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.663174 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.666278 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kpk7x"/"default-dockercfg-bchvm" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.666483 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kpk7x"/"openshift-service-ca.crt" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.666617 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kpk7x"/"kube-root-ca.crt" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.684834 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kpk7x/must-gather-7f9z4"] Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.693206 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqtdr\" (UniqueName: \"kubernetes.io/projected/6d19bd6c-1672-4d8d-af69-d1cda742bf83-kube-api-access-jqtdr\") pod \"must-gather-7f9z4\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.693305 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d19bd6c-1672-4d8d-af69-d1cda742bf83-must-gather-output\") pod \"must-gather-7f9z4\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.794382 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqtdr\" (UniqueName: \"kubernetes.io/projected/6d19bd6c-1672-4d8d-af69-d1cda742bf83-kube-api-access-jqtdr\") pod \"must-gather-7f9z4\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " 
pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.794442 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d19bd6c-1672-4d8d-af69-d1cda742bf83-must-gather-output\") pod \"must-gather-7f9z4\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.794882 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d19bd6c-1672-4d8d-af69-d1cda742bf83-must-gather-output\") pod \"must-gather-7f9z4\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.820726 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqtdr\" (UniqueName: \"kubernetes.io/projected/6d19bd6c-1672-4d8d-af69-d1cda742bf83-kube-api-access-jqtdr\") pod \"must-gather-7f9z4\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.982352 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:47 crc kubenswrapper[4995]: I0126 23:36:47.453667 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kpk7x/must-gather-7f9z4"] Jan 26 23:36:48 crc kubenswrapper[4995]: I0126 23:36:48.197829 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" event={"ID":"6d19bd6c-1672-4d8d-af69-d1cda742bf83","Type":"ContainerStarted","Data":"b8f762046c008e7fa5e5eebfcdf898132741a9d313df87b5ba572c6a8bc38898"} Jan 26 23:36:53 crc kubenswrapper[4995]: I0126 23:36:53.517448 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:36:53 crc kubenswrapper[4995]: E0126 23:36:53.518321 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:36:54 crc kubenswrapper[4995]: I0126 23:36:54.257372 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" event={"ID":"6d19bd6c-1672-4d8d-af69-d1cda742bf83","Type":"ContainerStarted","Data":"dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025"} Jan 26 23:36:54 crc kubenswrapper[4995]: I0126 23:36:54.257709 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" event={"ID":"6d19bd6c-1672-4d8d-af69-d1cda742bf83","Type":"ContainerStarted","Data":"68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb"} Jan 26 23:36:54 crc kubenswrapper[4995]: I0126 23:36:54.275312 4995 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" podStartSLOduration=1.995119361 podStartE2EDuration="8.27529124s" podCreationTimestamp="2026-01-26 23:36:46 +0000 UTC" firstStartedPulling="2026-01-26 23:36:47.462851322 +0000 UTC m=+1711.627558787" lastFinishedPulling="2026-01-26 23:36:53.743023191 +0000 UTC m=+1717.907730666" observedRunningTime="2026-01-26 23:36:54.272445238 +0000 UTC m=+1718.437152713" watchObservedRunningTime="2026-01-26 23:36:54.27529124 +0000 UTC m=+1718.439998705" Jan 26 23:37:04 crc kubenswrapper[4995]: I0126 23:37:04.052277 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-27jdj"] Jan 26 23:37:04 crc kubenswrapper[4995]: I0126 23:37:04.058345 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-27jdj"] Jan 26 23:37:04 crc kubenswrapper[4995]: I0126 23:37:04.526101 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad6fb114-59e8-443d-acd9-7241b8ee783c" path="/var/lib/kubelet/pods/ad6fb114-59e8-443d-acd9-7241b8ee783c/volumes" Jan 26 23:37:05 crc kubenswrapper[4995]: I0126 23:37:05.517526 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:37:05 crc kubenswrapper[4995]: E0126 23:37:05.518043 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:37:16 crc kubenswrapper[4995]: I0126 23:37:16.524511 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:37:16 crc kubenswrapper[4995]: 
E0126 23:37:16.525552 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:37:19 crc kubenswrapper[4995]: I0126 23:37:19.592876 4995 scope.go:117] "RemoveContainer" containerID="a3d0cf0c24bcaec0a584ae1322d81bc2cc97c571dfb1efe06bea1c6a8030ba2d" Jan 26 23:37:19 crc kubenswrapper[4995]: I0126 23:37:19.628915 4995 scope.go:117] "RemoveContainer" containerID="7e8cf2c919653011e8c269ce173fbce08dab23f7ee1814809bea2eec540dfb95" Jan 26 23:37:19 crc kubenswrapper[4995]: I0126 23:37:19.697353 4995 scope.go:117] "RemoveContainer" containerID="87d87779d4c3502bc67575e7abc513b3a091bacd50d75b12711b8a101c37d329" Jan 26 23:37:19 crc kubenswrapper[4995]: I0126 23:37:19.724285 4995 scope.go:117] "RemoveContainer" containerID="02cef367fb01441bf0b8a9914fe6804f776043582c13fe0f23584fe155ab9938" Jan 26 23:37:27 crc kubenswrapper[4995]: I0126 23:37:27.519507 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:37:27 crc kubenswrapper[4995]: E0126 23:37:27.520456 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:37:39 crc kubenswrapper[4995]: I0126 23:37:39.517863 4995 scope.go:117] "RemoveContainer" 
containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:37:39 crc kubenswrapper[4995]: E0126 23:37:39.519071 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:37:53 crc kubenswrapper[4995]: I0126 23:37:53.516770 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:37:53 crc kubenswrapper[4995]: E0126 23:37:53.517570 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.412834 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc_1cbffe6c-1d98-4769-8f02-7a966a63ef38/util/0.log" Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.618295 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc_1cbffe6c-1d98-4769-8f02-7a966a63ef38/util/0.log" Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.621566 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc_1cbffe6c-1d98-4769-8f02-7a966a63ef38/pull/0.log" Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.667995 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc_1cbffe6c-1d98-4769-8f02-7a966a63ef38/pull/0.log" Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.799466 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc_1cbffe6c-1d98-4769-8f02-7a966a63ef38/util/0.log" Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.809747 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc_1cbffe6c-1d98-4769-8f02-7a966a63ef38/pull/0.log" Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.851396 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc_1cbffe6c-1d98-4769-8f02-7a966a63ef38/extract/0.log" Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.979234 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6987f66698-x2fg8_c5dd6b1a-1515-4ad6-b89e-0c7253a71281/manager/0.log" Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.076164 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-pzzq9_70dc0d96-2ba1-487e-8ffc-a98725e002c4/manager/0.log" Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.202828 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn_5c23b438-d384-46e6-8c88-6703c70fccea/util/0.log" Jan 26 23:38:08 crc 
kubenswrapper[4995]: I0126 23:38:08.415823 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn_5c23b438-d384-46e6-8c88-6703c70fccea/util/0.log" Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.465506 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn_5c23b438-d384-46e6-8c88-6703c70fccea/pull/0.log" Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.468182 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn_5c23b438-d384-46e6-8c88-6703c70fccea/pull/0.log" Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.518181 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:38:08 crc kubenswrapper[4995]: E0126 23:38:08.518368 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.611985 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn_5c23b438-d384-46e6-8c88-6703c70fccea/pull/0.log" Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.650364 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn_5c23b438-d384-46e6-8c88-6703c70fccea/util/0.log" Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 
23:38:08.701745 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn_5c23b438-d384-46e6-8c88-6703c70fccea/extract/0.log" Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.882760 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-kgv2f_90ae2b4f-43e9-4a37-abc5-d90e958e540b/manager/0.log" Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.927717 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-gdvdp_4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6/manager/0.log" Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.084884 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-954b94f75-7q5kj_e29f1042-97e4-430c-a262-53ab3cca40d9/manager/0.log" Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.130834 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-r7mgm_bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e/manager/0.log" Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.350676 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-6gtf9_555394ee-9ad5-417f-9698-646ba1ddc5f2/manager/0.log" Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.431494 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-n9dc8_3a2f8d86-155b-476b-86c4-fda3eb595fc9/manager/0.log" Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.618603 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-w2gfg_fd2183e6-a9e4-44b8-861f-9a545aac1c12/manager/0.log" Jan 26 23:38:09 crc 
kubenswrapper[4995]: I0126 23:38:09.651446 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-gzjxj_235cf5b2-2094-4345-bf37-edbcb2e5e48f/manager/0.log" Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.805883 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh_0d39c5fc-e526-46e8-8773-6bf87e938b06/manager/0.log" Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.869176 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-p47jp_03047106-c820-43c2-bee1-c8b1fb3a0a0c/manager/0.log" Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.992717 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7f54b7d6d4-cf7gh_4e9b965f-6060-43e7-aa1c-b73472075bae/manager/0.log" Jan 26 23:38:10 crc kubenswrapper[4995]: I0126 23:38:10.044496 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-756f86fc74-7s666_ce22ba19-581c-4f75-9bd6-4de0538779a2/manager/0.log" Jan 26 23:38:10 crc kubenswrapper[4995]: I0126 23:38:10.189666 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b85484lj5_cfbd9d32-25ae-4369-8e16-ce174c0802dc/manager/0.log" Jan 26 23:38:10 crc kubenswrapper[4995]: I0126 23:38:10.419291 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-z9fdb_ca183057-4337-4dfb-a5ec-e8945fe74cca/registry-server/0.log" Jan 26 23:38:10 crc kubenswrapper[4995]: I0126 23:38:10.573652 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58b6ccbf98-85h8w_03478ac9-bd6b-4726-86b4-cd29045b6dc0/manager/0.log" Jan 26 
23:38:10 crc kubenswrapper[4995]: I0126 23:38:10.816267 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-z899w_1b364747-4f4c-4431-becf-0f2b30bc9d20/manager/0.log" Jan 26 23:38:10 crc kubenswrapper[4995]: I0126 23:38:10.980341 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-5zhml_931ac40b-6695-41c7-9d8f-c8eefca6e587/manager/0.log" Jan 26 23:38:11 crc kubenswrapper[4995]: I0126 23:38:11.035217 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-dk2dl_a0641fd3-88a7-4fb2-93f9-ffce84aadef2/operator/0.log" Jan 26 23:38:11 crc kubenswrapper[4995]: I0126 23:38:11.136857 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-b4kzb_aba99191-8a3a-47dc-8dca-136de682a567/manager/0.log" Jan 26 23:38:11 crc kubenswrapper[4995]: I0126 23:38:11.339978 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-kjmpf_b60b13f0-97c0-42b9-85fd-2a51218c9ac1/manager/0.log" Jan 26 23:38:11 crc kubenswrapper[4995]: I0126 23:38:11.390420 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-bmdgt_fd5d672d-1c27-4782-bbf3-c6d936a8c9bb/manager/0.log" Jan 26 23:38:11 crc kubenswrapper[4995]: I0126 23:38:11.647968 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-k8w76_fea9da97-72c6-4b3a-a479-1566d93b3a22/registry-server/0.log" Jan 26 23:38:11 crc kubenswrapper[4995]: I0126 23:38:11.735687 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-69796cd4f7-2jmll_001f4541-5731-4423-9cf7-f2c339b975b1/manager/0.log" Jan 26 
23:38:19 crc kubenswrapper[4995]: I0126 23:38:19.822414 4995 scope.go:117] "RemoveContainer" containerID="92cc26c82a9b23a9721c60030809c14c060714d70c702de958b8d81f8d16479b" Jan 26 23:38:19 crc kubenswrapper[4995]: I0126 23:38:19.845129 4995 scope.go:117] "RemoveContainer" containerID="42bdbf79e7939fb4f6bd922600909eb049e24579c79123df69d4d9b5938f3988" Jan 26 23:38:19 crc kubenswrapper[4995]: I0126 23:38:19.879154 4995 scope.go:117] "RemoveContainer" containerID="0bebf82f7d2ff6fccacc8ac1b19e5ae9a0ca59b2e9b344a0b5356ce530d49427" Jan 26 23:38:19 crc kubenswrapper[4995]: I0126 23:38:19.935574 4995 scope.go:117] "RemoveContainer" containerID="9c92253ce611dea0df9e21427e5984e7db9bccf73045bb24769fa3dbad187a39" Jan 26 23:38:19 crc kubenswrapper[4995]: I0126 23:38:19.954889 4995 scope.go:117] "RemoveContainer" containerID="6252efa89a6bded11f55db4306e63c08033e933d2981726c47ebad7505a562dc" Jan 26 23:38:20 crc kubenswrapper[4995]: I0126 23:38:20.517012 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:38:20 crc kubenswrapper[4995]: E0126 23:38:20.517350 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:38:33 crc kubenswrapper[4995]: I0126 23:38:33.534554 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-s4cw2_8e46628e-0c8d-4128-b57c-ad324ff9f9bc/control-plane-machine-set-operator/0.log" Jan 26 23:38:33 crc kubenswrapper[4995]: I0126 23:38:33.731923 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-klb9g_49ad869c-a391-4d0b-99fa-74e9d7ef4e87/kube-rbac-proxy/0.log" Jan 26 23:38:33 crc kubenswrapper[4995]: I0126 23:38:33.789695 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-klb9g_49ad869c-a391-4d0b-99fa-74e9d7ef4e87/machine-api-operator/0.log" Jan 26 23:38:35 crc kubenswrapper[4995]: I0126 23:38:35.516888 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:38:35 crc kubenswrapper[4995]: E0126 23:38:35.517156 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:38:47 crc kubenswrapper[4995]: I0126 23:38:47.517779 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:38:47 crc kubenswrapper[4995]: E0126 23:38:47.518597 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:38:48 crc kubenswrapper[4995]: I0126 23:38:48.883179 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-4g78v_0ea05f4b-1373-4e08-9d78-e214b84cdc79/cert-manager-controller/0.log" Jan 26 23:38:49 crc 
kubenswrapper[4995]: I0126 23:38:49.104011 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-hjbt4_10b23efd-9250-469e-8bce-4f31c05d1470/cert-manager-cainjector/0.log" Jan 26 23:38:49 crc kubenswrapper[4995]: I0126 23:38:49.209167 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-g88s9_5cf25cae-f1af-44e4-a613-be45044cf998/cert-manager-webhook/0.log" Jan 26 23:39:02 crc kubenswrapper[4995]: I0126 23:39:02.517973 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:39:02 crc kubenswrapper[4995]: E0126 23:39:02.518961 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:39:05 crc kubenswrapper[4995]: I0126 23:39:05.078843 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-8rf6d_fa9c3198-27d3-4733-8c9c-ccc6f0168f0d/nmstate-console-plugin/0.log" Jan 26 23:39:05 crc kubenswrapper[4995]: I0126 23:39:05.261346 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4nqd8_8bd5c3be-b641-437a-9aad-bcd9a7dd2c56/nmstate-handler/0.log" Jan 26 23:39:05 crc kubenswrapper[4995]: I0126 23:39:05.285421 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-75scl_49297381-c6bb-4ede-9f80-38ee237f7a3e/kube-rbac-proxy/0.log" Jan 26 23:39:05 crc kubenswrapper[4995]: I0126 23:39:05.380889 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-75scl_49297381-c6bb-4ede-9f80-38ee237f7a3e/nmstate-metrics/0.log" Jan 26 23:39:05 crc kubenswrapper[4995]: I0126 23:39:05.465656 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-fnp66_1f224cbd-cdf6-474c-bcc6-a37358dcd4f5/nmstate-operator/0.log" Jan 26 23:39:05 crc kubenswrapper[4995]: I0126 23:39:05.592023 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-jkj8f_4adb027e-2869-4cbc-bdb7-63ae41659c28/nmstate-webhook/0.log" Jan 26 23:39:15 crc kubenswrapper[4995]: I0126 23:39:15.517943 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:39:15 crc kubenswrapper[4995]: E0126 23:39:15.518811 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.062590 4995 scope.go:117] "RemoveContainer" containerID="4f43eaafefb61a73772d9d42e692be3b8d70484a9a76ac96db06e9b550ed122a" Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.106893 4995 scope.go:117] "RemoveContainer" containerID="b4b16b6f1cc961085f1980b33bb732c8fc0fbcf31eda7643a2f07d72636e35f6" Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.138326 4995 scope.go:117] "RemoveContainer" containerID="277efe3193b009f2b06839712b4dacd62f8313f279f58b3eccc7197afb22175e" Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.166787 4995 scope.go:117] "RemoveContainer" containerID="fe935962b3dd798431c17ed02d94a0c871a317035a5bd78cc9d0e159f906c4a8" 
Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.226888 4995 scope.go:117] "RemoveContainer" containerID="7e3ee0bb83f474f59b73fb0e9420f6ea26d6576fd1f9c21251e039a52f0471bc" Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.253829 4995 scope.go:117] "RemoveContainer" containerID="1866d568d45be33fe5efec6245bd56a7ca5c85d09dddb97e98e3df586623483f" Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.286117 4995 scope.go:117] "RemoveContainer" containerID="99eb0b14efb02af86f6c14feef7a145f682f560ca0fbfcaebf933cf15112c438" Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.325285 4995 scope.go:117] "RemoveContainer" containerID="cf3bdba0bcbd9d81e57b55b762961560e4562c68a0aaacec99cefb4e736c2028" Jan 26 23:39:22 crc kubenswrapper[4995]: I0126 23:39:22.082883 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zfmp4_a1c71758-f818-4fd6-a985-4aa33488e96c/prometheus-operator/0.log" Jan 26 23:39:22 crc kubenswrapper[4995]: I0126 23:39:22.295664 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_684ae2c3-240e-4b73-9aaa-391ad824f47d/prometheus-operator-admission-webhook/0.log" Jan 26 23:39:22 crc kubenswrapper[4995]: I0126 23:39:22.359908 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de/prometheus-operator-admission-webhook/0.log" Jan 26 23:39:22 crc kubenswrapper[4995]: I0126 23:39:22.514746 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-g4lwc_549a554b-0ef6-4d8b-b2cf-4445474572d2/operator/0.log" Jan 26 23:39:22 crc kubenswrapper[4995]: I0126 23:39:22.631959 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-k62mg_403406f0-ed75-4c4d-878b-a21885f105d2/observability-ui-dashboards/0.log" Jan 26 23:39:22 crc kubenswrapper[4995]: I0126 23:39:22.703420 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-ngw26_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8/perses-operator/0.log" Jan 26 23:39:26 crc kubenswrapper[4995]: I0126 23:39:26.522777 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:39:26 crc kubenswrapper[4995]: E0126 23:39:26.523626 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.517452 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:39:37 crc kubenswrapper[4995]: E0126 23:39:37.517998 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.752800 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bzgkt"] Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.754297 4995 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.773057 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bzgkt"] Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.834575 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-catalog-content\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.834641 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-utilities\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.834673 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8dxv\" (UniqueName: \"kubernetes.io/projected/9f7c2892-e695-4b52-87c6-e32d1495bf87-kube-api-access-n8dxv\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.936242 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-catalog-content\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.936303 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-utilities\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.936331 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8dxv\" (UniqueName: \"kubernetes.io/projected/9f7c2892-e695-4b52-87c6-e32d1495bf87-kube-api-access-n8dxv\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.936806 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-catalog-content\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.937205 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-utilities\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.957464 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8dxv\" (UniqueName: \"kubernetes.io/projected/9f7c2892-e695-4b52-87c6-e32d1495bf87-kube-api-access-n8dxv\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:38 crc kubenswrapper[4995]: I0126 23:39:38.076669 4995 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:38 crc kubenswrapper[4995]: I0126 23:39:38.369554 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bzgkt"] Jan 26 23:39:38 crc kubenswrapper[4995]: I0126 23:39:38.667448 4995 generic.go:334] "Generic (PLEG): container finished" podID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerID="d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b" exitCode=0 Jan 26 23:39:38 crc kubenswrapper[4995]: I0126 23:39:38.667493 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzgkt" event={"ID":"9f7c2892-e695-4b52-87c6-e32d1495bf87","Type":"ContainerDied","Data":"d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b"} Jan 26 23:39:38 crc kubenswrapper[4995]: I0126 23:39:38.667519 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzgkt" event={"ID":"9f7c2892-e695-4b52-87c6-e32d1495bf87","Type":"ContainerStarted","Data":"835bfde9b7a58b9c1c6bcda08bf7904147551057930e3a6bacaeb397a7228637"} Jan 26 23:39:38 crc kubenswrapper[4995]: I0126 23:39:38.669322 4995 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.531057 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-hp8cv_fd8ee636-b6e8-4caf-bf47-8356cf3974a5/kube-rbac-proxy/0.log" Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.662375 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-hp8cv_fd8ee636-b6e8-4caf-bf47-8356cf3974a5/controller/0.log" Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.676723 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzgkt" 
event={"ID":"9f7c2892-e695-4b52-87c6-e32d1495bf87","Type":"ContainerStarted","Data":"7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488"} Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.766248 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-frr-files/0.log" Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.921344 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-frr-files/0.log" Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.959122 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-metrics/0.log" Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.959942 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-reloader/0.log" Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.985312 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-reloader/0.log" Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.299223 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-frr-files/0.log" Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.349558 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-metrics/0.log" Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.500291 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-reloader/0.log" Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.530273 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-metrics/0.log" Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.685533 4995 generic.go:334] "Generic (PLEG): container finished" podID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerID="7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488" exitCode=0 Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.685572 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzgkt" event={"ID":"9f7c2892-e695-4b52-87c6-e32d1495bf87","Type":"ContainerDied","Data":"7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488"} Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.788903 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-frr-files/0.log" Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.848739 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-reloader/0.log" Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.916897 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-metrics/0.log" Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.949150 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/controller/0.log" Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.153060 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/frr-metrics/0.log" Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.163559 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/kube-rbac-proxy-frr/0.log" Jan 26 23:39:41 crc 
kubenswrapper[4995]: I0126 23:39:41.188223 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/kube-rbac-proxy/0.log" Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.394201 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/reloader/0.log" Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.438261 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-bqkf9_d71dd2bc-e8c9-4a37-9096-35a1f19333f8/frr-k8s-webhook-server/0.log" Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.648397 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-9666f9f76-p9s9z_b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9/manager/0.log" Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.708456 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzgkt" event={"ID":"9f7c2892-e695-4b52-87c6-e32d1495bf87","Type":"ContainerStarted","Data":"d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99"} Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.731525 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bzgkt" podStartSLOduration=2.276751068 podStartE2EDuration="4.731503774s" podCreationTimestamp="2026-01-26 23:39:37 +0000 UTC" firstStartedPulling="2026-01-26 23:39:38.669131128 +0000 UTC m=+1882.833838583" lastFinishedPulling="2026-01-26 23:39:41.123883824 +0000 UTC m=+1885.288591289" observedRunningTime="2026-01-26 23:39:41.728644752 +0000 UTC m=+1885.893352217" watchObservedRunningTime="2026-01-26 23:39:41.731503774 +0000 UTC m=+1885.896211239" Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.827927 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/frr/0.log" Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.938417 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79fc76bd5c-vctw9_191e8757-940a-4e3e-a884-f5935f9f8201/webhook-server/0.log" Jan 26 23:39:42 crc kubenswrapper[4995]: I0126 23:39:42.037851 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jlkxq_4768de9d-be12-4b0b-9bd1-03f127a1a557/kube-rbac-proxy/0.log" Jan 26 23:39:42 crc kubenswrapper[4995]: I0126 23:39:42.252542 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jlkxq_4768de9d-be12-4b0b-9bd1-03f127a1a557/speaker/0.log" Jan 26 23:39:48 crc kubenswrapper[4995]: I0126 23:39:48.077741 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:48 crc kubenswrapper[4995]: I0126 23:39:48.078173 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:48 crc kubenswrapper[4995]: I0126 23:39:48.130110 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:48 crc kubenswrapper[4995]: I0126 23:39:48.826608 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:51 crc kubenswrapper[4995]: I0126 23:39:51.517251 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:39:51 crc kubenswrapper[4995]: E0126 23:39:51.517768 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:39:51 crc kubenswrapper[4995]: I0126 23:39:51.746637 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bzgkt"] Jan 26 23:39:51 crc kubenswrapper[4995]: I0126 23:39:51.747034 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bzgkt" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="registry-server" containerID="cri-o://d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99" gracePeriod=2 Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.210332 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.393151 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8dxv\" (UniqueName: \"kubernetes.io/projected/9f7c2892-e695-4b52-87c6-e32d1495bf87-kube-api-access-n8dxv\") pod \"9f7c2892-e695-4b52-87c6-e32d1495bf87\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.393213 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-catalog-content\") pod \"9f7c2892-e695-4b52-87c6-e32d1495bf87\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.393313 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-utilities\") pod \"9f7c2892-e695-4b52-87c6-e32d1495bf87\" (UID: 
\"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.394283 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-utilities" (OuterVolumeSpecName: "utilities") pod "9f7c2892-e695-4b52-87c6-e32d1495bf87" (UID: "9f7c2892-e695-4b52-87c6-e32d1495bf87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.401092 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7c2892-e695-4b52-87c6-e32d1495bf87-kube-api-access-n8dxv" (OuterVolumeSpecName: "kube-api-access-n8dxv") pod "9f7c2892-e695-4b52-87c6-e32d1495bf87" (UID: "9f7c2892-e695-4b52-87c6-e32d1495bf87"). InnerVolumeSpecName "kube-api-access-n8dxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.452973 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f7c2892-e695-4b52-87c6-e32d1495bf87" (UID: "9f7c2892-e695-4b52-87c6-e32d1495bf87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.495646 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8dxv\" (UniqueName: \"kubernetes.io/projected/9f7c2892-e695-4b52-87c6-e32d1495bf87-kube-api-access-n8dxv\") on node \"crc\" DevicePath \"\"" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.495692 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.495701 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.820603 4995 generic.go:334] "Generic (PLEG): container finished" podID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerID="d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99" exitCode=0 Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.820692 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzgkt" event={"ID":"9f7c2892-e695-4b52-87c6-e32d1495bf87","Type":"ContainerDied","Data":"d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99"} Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.820742 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.820801 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzgkt" event={"ID":"9f7c2892-e695-4b52-87c6-e32d1495bf87","Type":"ContainerDied","Data":"835bfde9b7a58b9c1c6bcda08bf7904147551057930e3a6bacaeb397a7228637"} Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.820838 4995 scope.go:117] "RemoveContainer" containerID="d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.846528 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bzgkt"] Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.854178 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bzgkt"] Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.857073 4995 scope.go:117] "RemoveContainer" containerID="7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.880258 4995 scope.go:117] "RemoveContainer" containerID="d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.919011 4995 scope.go:117] "RemoveContainer" containerID="d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99" Jan 26 23:39:52 crc kubenswrapper[4995]: E0126 23:39:52.919482 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99\": container with ID starting with d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99 not found: ID does not exist" containerID="d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.919518 4995 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99"} err="failed to get container status \"d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99\": rpc error: code = NotFound desc = could not find container \"d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99\": container with ID starting with d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99 not found: ID does not exist" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.919544 4995 scope.go:117] "RemoveContainer" containerID="7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488" Jan 26 23:39:52 crc kubenswrapper[4995]: E0126 23:39:52.919879 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488\": container with ID starting with 7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488 not found: ID does not exist" containerID="7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.919899 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488"} err="failed to get container status \"7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488\": rpc error: code = NotFound desc = could not find container \"7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488\": container with ID starting with 7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488 not found: ID does not exist" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.919915 4995 scope.go:117] "RemoveContainer" containerID="d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b" Jan 26 23:39:52 crc kubenswrapper[4995]: E0126 
23:39:52.920228 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b\": container with ID starting with d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b not found: ID does not exist" containerID="d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.920259 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b"} err="failed to get container status \"d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b\": rpc error: code = NotFound desc = could not find container \"d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b\": container with ID starting with d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b not found: ID does not exist" Jan 26 23:39:54 crc kubenswrapper[4995]: I0126 23:39:54.533813 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" path="/var/lib/kubelet/pods/9f7c2892-e695-4b52-87c6-e32d1495bf87/volumes" Jan 26 23:40:02 crc kubenswrapper[4995]: I0126 23:40:02.518604 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:40:02 crc kubenswrapper[4995]: E0126 23:40:02.519523 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.261991 
4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_5083beb6-ae53-44e5-a82c-872943996b7b/init-config-reloader/0.log" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.472287 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_5083beb6-ae53-44e5-a82c-872943996b7b/init-config-reloader/0.log" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.558296 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_5083beb6-ae53-44e5-a82c-872943996b7b/alertmanager/0.log" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.652295 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_5083beb6-ae53-44e5-a82c-872943996b7b/config-reloader/0.log" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.788090 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_584b0f31-d1a1-4e26-b025-0927cfa15d55/ceilometer-notification-agent/0.log" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.836289 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_584b0f31-d1a1-4e26-b025-0927cfa15d55/ceilometer-central-agent/0.log" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.894396 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_584b0f31-d1a1-4e26-b025-0927cfa15d55/sg-core/0.log" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.895124 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_584b0f31-d1a1-4e26-b025-0927cfa15d55/proxy-httpd/0.log" Jan 26 23:40:07 crc kubenswrapper[4995]: I0126 23:40:07.103212 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_keystone-bootstrap-sf9jb_c5595470-f70f-4bc9-9012-b939a6b2fc0f/keystone-bootstrap/0.log" Jan 26 23:40:07 crc kubenswrapper[4995]: I0126 23:40:07.103730 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-984bfcd89-8d4rw_257ee213-d2fa-4d94-9b26-0c62b5411e44/keystone-api/0.log" Jan 26 23:40:07 crc kubenswrapper[4995]: I0126 23:40:07.281622 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_kube-state-metrics-0_86cef714-2c2e-4825-bab7-c653df90a3c2/kube-state-metrics/0.log" Jan 26 23:40:07 crc kubenswrapper[4995]: I0126 23:40:07.665513 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_5da7bc3d-c0c7-4935-ba58-c64da8c943b0/mysql-bootstrap/0.log" Jan 26 23:40:07 crc kubenswrapper[4995]: I0126 23:40:07.901657 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_5da7bc3d-c0c7-4935-ba58-c64da8c943b0/galera/0.log" Jan 26 23:40:07 crc kubenswrapper[4995]: I0126 23:40:07.939281 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_5da7bc3d-c0c7-4935-ba58-c64da8c943b0/mysql-bootstrap/0.log" Jan 26 23:40:08 crc kubenswrapper[4995]: I0126 23:40:08.089602 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstackclient_f27553d1-06f5-4e72-9d14-714d48fbd854/openstackclient/0.log" Jan 26 23:40:08 crc kubenswrapper[4995]: I0126 23:40:08.260493 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_331b761a-fa99-405f-aedf-a94cb456cdfc/init-config-reloader/0.log" Jan 26 23:40:08 crc kubenswrapper[4995]: I0126 23:40:08.452082 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_331b761a-fa99-405f-aedf-a94cb456cdfc/init-config-reloader/0.log" Jan 26 23:40:08 crc 
kubenswrapper[4995]: I0126 23:40:08.496474 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_331b761a-fa99-405f-aedf-a94cb456cdfc/config-reloader/0.log" Jan 26 23:40:08 crc kubenswrapper[4995]: I0126 23:40:08.501269 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_331b761a-fa99-405f-aedf-a94cb456cdfc/prometheus/0.log" Jan 26 23:40:08 crc kubenswrapper[4995]: I0126 23:40:08.681713 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_331b761a-fa99-405f-aedf-a94cb456cdfc/thanos-sidecar/0.log" Jan 26 23:40:08 crc kubenswrapper[4995]: I0126 23:40:08.686494 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_54ccebac-5075-4c00-a1e9-ebb66b43876e/setup-container/0.log" Jan 26 23:40:08 crc kubenswrapper[4995]: I0126 23:40:08.955570 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_54ccebac-5075-4c00-a1e9-ebb66b43876e/setup-container/0.log" Jan 26 23:40:09 crc kubenswrapper[4995]: I0126 23:40:09.046282 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_54ccebac-5075-4c00-a1e9-ebb66b43876e/rabbitmq/0.log" Jan 26 23:40:09 crc kubenswrapper[4995]: I0126 23:40:09.172064 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_4b909799-2071-4d68-ab55-d29f6e224bf2/setup-container/0.log" Jan 26 23:40:09 crc kubenswrapper[4995]: I0126 23:40:09.439583 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_4b909799-2071-4d68-ab55-d29f6e224bf2/setup-container/0.log" Jan 26 23:40:09 crc kubenswrapper[4995]: I0126 23:40:09.557247 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_4b909799-2071-4d68-ab55-d29f6e224bf2/rabbitmq/0.log" Jan 26 23:40:14 crc kubenswrapper[4995]: I0126 23:40:14.518222 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:40:14 crc kubenswrapper[4995]: E0126 23:40:14.518917 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:40:16 crc kubenswrapper[4995]: I0126 23:40:16.151994 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_memcached-0_9e495843-c3b4-4d2e-9c40-b11f0d95b5f9/memcached/0.log" Jan 26 23:40:20 crc kubenswrapper[4995]: I0126 23:40:20.468802 4995 scope.go:117] "RemoveContainer" containerID="2f4a4987d76b545f02a7d8c08b9fd9eca391865fce1211a494dbae9aeadf38f3" Jan 26 23:40:20 crc kubenswrapper[4995]: I0126 23:40:20.535833 4995 scope.go:117] "RemoveContainer" containerID="4af14df6baf5e2d7f5d921b037ff739c3922a94531b7d54b66151b9b3794fdee" Jan 26 23:40:20 crc kubenswrapper[4995]: I0126 23:40:20.560049 4995 scope.go:117] "RemoveContainer" containerID="145cc5b8f4d1b5f2f7c477df014248adbc6dd21d5028dfe55f19a4cb11fa10b1" Jan 26 23:40:20 crc kubenswrapper[4995]: I0126 23:40:20.597850 4995 scope.go:117] "RemoveContainer" containerID="bbc420fd12fe1d211845fe7f68211386fea3f13c2e6223073fc5536f18ea16a2" Jan 26 23:40:25 crc kubenswrapper[4995]: I0126 23:40:25.517275 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:40:25 crc kubenswrapper[4995]: E0126 23:40:25.518211 4995 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:40:27 crc kubenswrapper[4995]: I0126 23:40:27.792532 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8_a2fc70c8-babd-496e-8d1c-acd82bb98901/util/0.log" Jan 26 23:40:27 crc kubenswrapper[4995]: I0126 23:40:27.945230 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8_a2fc70c8-babd-496e-8d1c-acd82bb98901/util/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.036945 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8_a2fc70c8-babd-496e-8d1c-acd82bb98901/pull/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.039905 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8_a2fc70c8-babd-496e-8d1c-acd82bb98901/pull/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.173010 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8_a2fc70c8-babd-496e-8d1c-acd82bb98901/util/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.197772 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8_a2fc70c8-babd-496e-8d1c-acd82bb98901/pull/0.log" Jan 26 23:40:28 crc 
kubenswrapper[4995]: I0126 23:40:28.198303 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8_a2fc70c8-babd-496e-8d1c-acd82bb98901/extract/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.368732 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m_a59475a0-c56e-4d7d-a062-2a9b7188a601/util/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.505666 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m_a59475a0-c56e-4d7d-a062-2a9b7188a601/util/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.506917 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m_a59475a0-c56e-4d7d-a062-2a9b7188a601/pull/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.545557 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m_a59475a0-c56e-4d7d-a062-2a9b7188a601/pull/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.727073 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m_a59475a0-c56e-4d7d-a062-2a9b7188a601/pull/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.747116 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m_a59475a0-c56e-4d7d-a062-2a9b7188a601/util/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.799191 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m_a59475a0-c56e-4d7d-a062-2a9b7188a601/extract/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.905032 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6_c72f27ba-28b4-41be-a2e3-894496ce06fb/util/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.100192 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6_c72f27ba-28b4-41be-a2e3-894496ce06fb/pull/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.143349 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6_c72f27ba-28b4-41be-a2e3-894496ce06fb/pull/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.150543 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6_c72f27ba-28b4-41be-a2e3-894496ce06fb/util/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.324559 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6_c72f27ba-28b4-41be-a2e3-894496ce06fb/util/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.359979 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6_c72f27ba-28b4-41be-a2e3-894496ce06fb/extract/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.362907 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6_c72f27ba-28b4-41be-a2e3-894496ce06fb/pull/0.log" Jan 
26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.507467 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh_388e02fc-e28d-4d4a-94ec-464eb7573a8d/util/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.724658 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh_388e02fc-e28d-4d4a-94ec-464eb7573a8d/pull/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.729302 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh_388e02fc-e28d-4d4a-94ec-464eb7573a8d/util/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.735776 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh_388e02fc-e28d-4d4a-94ec-464eb7573a8d/pull/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.964465 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh_388e02fc-e28d-4d4a-94ec-464eb7573a8d/util/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.979093 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh_388e02fc-e28d-4d4a-94ec-464eb7573a8d/extract/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.999240 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh_388e02fc-e28d-4d4a-94ec-464eb7573a8d/pull/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.184126 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-wfnlj_f956bbfb-557b-4b78-b2eb-141bdd1ca81f/extract-utilities/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.363637 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfnlj_f956bbfb-557b-4b78-b2eb-141bdd1ca81f/extract-utilities/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.370711 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfnlj_f956bbfb-557b-4b78-b2eb-141bdd1ca81f/extract-content/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.404292 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfnlj_f956bbfb-557b-4b78-b2eb-141bdd1ca81f/extract-content/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.596868 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfnlj_f956bbfb-557b-4b78-b2eb-141bdd1ca81f/extract-content/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.625025 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfnlj_f956bbfb-557b-4b78-b2eb-141bdd1ca81f/extract-utilities/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.874293 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6tk5_0d1ac969-80ec-4450-9f6d-0cca599d2185/extract-utilities/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.883282 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfnlj_f956bbfb-557b-4b78-b2eb-141bdd1ca81f/registry-server/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.019701 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-c6tk5_0d1ac969-80ec-4450-9f6d-0cca599d2185/extract-content/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.112892 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6tk5_0d1ac969-80ec-4450-9f6d-0cca599d2185/extract-utilities/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.112893 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6tk5_0d1ac969-80ec-4450-9f6d-0cca599d2185/extract-content/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.236814 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6tk5_0d1ac969-80ec-4450-9f6d-0cca599d2185/extract-utilities/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.305223 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6tk5_0d1ac969-80ec-4450-9f6d-0cca599d2185/extract-content/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.347419 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vsjb7_d781053b-fcf3-44a7-812a-8af6c2c1ab07/marketplace-operator/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.739994 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6tk5_0d1ac969-80ec-4450-9f6d-0cca599d2185/registry-server/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.829555 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-56ct7_7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8/extract-utilities/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.955264 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-56ct7_7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8/extract-content/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.967327 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-56ct7_7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8/extract-utilities/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.972775 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-56ct7_7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8/extract-content/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.128717 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-56ct7_7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8/extract-content/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.133832 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-56ct7_7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8/extract-utilities/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.216323 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-56ct7_7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8/registry-server/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.220802 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fw5x_269f6fbd-326f-45d1-a1a6-ea5da5b7daff/extract-utilities/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.349530 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fw5x_269f6fbd-326f-45d1-a1a6-ea5da5b7daff/extract-content/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.384398 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-4fw5x_269f6fbd-326f-45d1-a1a6-ea5da5b7daff/extract-utilities/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.400652 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fw5x_269f6fbd-326f-45d1-a1a6-ea5da5b7daff/extract-content/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.590051 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fw5x_269f6fbd-326f-45d1-a1a6-ea5da5b7daff/extract-content/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.615562 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fw5x_269f6fbd-326f-45d1-a1a6-ea5da5b7daff/extract-utilities/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.833294 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fw5x_269f6fbd-326f-45d1-a1a6-ea5da5b7daff/registry-server/0.log" Jan 26 23:40:33 crc kubenswrapper[4995]: I0126 23:40:33.038447 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-sf9jb"] Jan 26 23:40:33 crc kubenswrapper[4995]: I0126 23:40:33.043647 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-sf9jb"] Jan 26 23:40:34 crc kubenswrapper[4995]: I0126 23:40:34.529476 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5595470-f70f-4bc9-9012-b939a6b2fc0f" path="/var/lib/kubelet/pods/c5595470-f70f-4bc9-9012-b939a6b2fc0f/volumes" Jan 26 23:40:37 crc kubenswrapper[4995]: I0126 23:40:37.516860 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:40:37 crc kubenswrapper[4995]: E0126 23:40:37.517401 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:40:46 crc kubenswrapper[4995]: I0126 23:40:46.326475 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zfmp4_a1c71758-f818-4fd6-a985-4aa33488e96c/prometheus-operator/0.log" Jan 26 23:40:46 crc kubenswrapper[4995]: I0126 23:40:46.360053 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_684ae2c3-240e-4b73-9aaa-391ad824f47d/prometheus-operator-admission-webhook/0.log" Jan 26 23:40:46 crc kubenswrapper[4995]: I0126 23:40:46.363535 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de/prometheus-operator-admission-webhook/0.log" Jan 26 23:40:46 crc kubenswrapper[4995]: I0126 23:40:46.568590 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-ngw26_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8/perses-operator/0.log" Jan 26 23:40:46 crc kubenswrapper[4995]: I0126 23:40:46.610234 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-g4lwc_549a554b-0ef6-4d8b-b2cf-4445474572d2/operator/0.log" Jan 26 23:40:46 crc kubenswrapper[4995]: I0126 23:40:46.615214 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-k62mg_403406f0-ed75-4c4d-878b-a21885f105d2/observability-ui-dashboards/0.log" Jan 26 23:40:51 crc kubenswrapper[4995]: I0126 23:40:51.517456 4995 scope.go:117] 
"RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:40:51 crc kubenswrapper[4995]: E0126 23:40:51.518489 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:41:02 crc kubenswrapper[4995]: I0126 23:41:02.517681 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:41:02 crc kubenswrapper[4995]: E0126 23:41:02.518495 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:41:13 crc kubenswrapper[4995]: I0126 23:41:13.517668 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:41:14 crc kubenswrapper[4995]: I0126 23:41:14.575283 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"4acaaa2359dd7eaaa1880a32b4db4f9439b498f50ad90d55e3ac94e735bc5061"} Jan 26 23:41:20 crc kubenswrapper[4995]: I0126 23:41:20.748173 4995 scope.go:117] "RemoveContainer" containerID="404080cef7718114d3ef40681ba2896d4b0b7f3fac87f1f21efcf7b7105e0285" Jan 26 23:41:20 crc kubenswrapper[4995]: 
I0126 23:41:20.777728 4995 scope.go:117] "RemoveContainer" containerID="cd3358a0ea8ceaa10989cd97ffca9dfefbbb82795be31ea1a44850cfa67b5055" Jan 26 23:41:20 crc kubenswrapper[4995]: I0126 23:41:20.830689 4995 scope.go:117] "RemoveContainer" containerID="eaa76726f01faaa0a08761d9ea0a24bad284c08bc58814b2904115408ab201e0" Jan 26 23:41:20 crc kubenswrapper[4995]: I0126 23:41:20.873336 4995 scope.go:117] "RemoveContainer" containerID="05941be74554d8c96582833cc04e5255893bcfe29812230a633a9595ed2b3e52" Jan 26 23:41:20 crc kubenswrapper[4995]: I0126 23:41:20.926218 4995 scope.go:117] "RemoveContainer" containerID="25c8d3c2991d69a5a3326fb481b95cc7b754074c8cad3e82a6126d4dff723e1b" Jan 26 23:41:20 crc kubenswrapper[4995]: I0126 23:41:20.955650 4995 scope.go:117] "RemoveContainer" containerID="30656b19d1917eb3dd412a07deb00ccc5461cf48e1c2a15363c20a1572d6ee9c" Jan 26 23:41:20 crc kubenswrapper[4995]: I0126 23:41:20.988348 4995 scope.go:117] "RemoveContainer" containerID="bf5164b7995961e784d793950a89a89942f6f93bc6fda24c41d104c6d00ebc5b" Jan 26 23:41:21 crc kubenswrapper[4995]: I0126 23:41:21.020851 4995 scope.go:117] "RemoveContainer" containerID="19015ac8e66cfd6b595e7c7c92f0a44c4fa7c488406dc0b9e0bf719041c6fbf3" Jan 26 23:41:21 crc kubenswrapper[4995]: I0126 23:41:21.051278 4995 scope.go:117] "RemoveContainer" containerID="27d7920d9fd33f11ed78c7916026f8f12eca21c60e182186baff705d11e4cf74" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.143354 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gljh7"] Jan 26 23:41:47 crc kubenswrapper[4995]: E0126 23:41:47.144343 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="extract-utilities" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.144363 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="extract-utilities" Jan 26 23:41:47 crc kubenswrapper[4995]: 
E0126 23:41:47.144380 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="registry-server" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.144391 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="registry-server" Jan 26 23:41:47 crc kubenswrapper[4995]: E0126 23:41:47.144425 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="extract-content" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.144436 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="extract-content" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.144658 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="registry-server" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.146540 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.167139 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gljh7"] Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.203768 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxrd6\" (UniqueName: \"kubernetes.io/projected/c3de82ef-2dae-4204-9b32-878af19e4055-kube-api-access-dxrd6\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.203956 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-utilities\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.204707 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-catalog-content\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.306452 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-catalog-content\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.306544 4995 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-dxrd6\" (UniqueName: \"kubernetes.io/projected/c3de82ef-2dae-4204-9b32-878af19e4055-kube-api-access-dxrd6\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.306594 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-utilities\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.307279 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-utilities\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.307284 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-catalog-content\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.333059 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxrd6\" (UniqueName: \"kubernetes.io/projected/c3de82ef-2dae-4204-9b32-878af19e4055-kube-api-access-dxrd6\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.477506 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.971903 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gljh7"] Jan 26 23:41:48 crc kubenswrapper[4995]: I0126 23:41:48.910762 4995 generic.go:334] "Generic (PLEG): container finished" podID="c3de82ef-2dae-4204-9b32-878af19e4055" containerID="3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3" exitCode=0 Jan 26 23:41:48 crc kubenswrapper[4995]: I0126 23:41:48.910832 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gljh7" event={"ID":"c3de82ef-2dae-4204-9b32-878af19e4055","Type":"ContainerDied","Data":"3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3"} Jan 26 23:41:48 crc kubenswrapper[4995]: I0126 23:41:48.910872 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gljh7" event={"ID":"c3de82ef-2dae-4204-9b32-878af19e4055","Type":"ContainerStarted","Data":"f2a57a01c6fd27a753d1502c47deb5d11a61ac11fc497d91ee3965247570c99b"} Jan 26 23:41:49 crc kubenswrapper[4995]: I0126 23:41:49.937001 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gljh7" event={"ID":"c3de82ef-2dae-4204-9b32-878af19e4055","Type":"ContainerStarted","Data":"59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb"} Jan 26 23:41:50 crc kubenswrapper[4995]: I0126 23:41:50.962446 4995 generic.go:334] "Generic (PLEG): container finished" podID="c3de82ef-2dae-4204-9b32-878af19e4055" containerID="59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb" exitCode=0 Jan 26 23:41:50 crc kubenswrapper[4995]: I0126 23:41:50.962496 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gljh7" 
event={"ID":"c3de82ef-2dae-4204-9b32-878af19e4055","Type":"ContainerDied","Data":"59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb"} Jan 26 23:41:51 crc kubenswrapper[4995]: I0126 23:41:51.972918 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gljh7" event={"ID":"c3de82ef-2dae-4204-9b32-878af19e4055","Type":"ContainerStarted","Data":"16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b"} Jan 26 23:41:52 crc kubenswrapper[4995]: I0126 23:41:52.000257 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gljh7" podStartSLOduration=2.555375148 podStartE2EDuration="5.000237833s" podCreationTimestamp="2026-01-26 23:41:47 +0000 UTC" firstStartedPulling="2026-01-26 23:41:48.913316671 +0000 UTC m=+2013.078024136" lastFinishedPulling="2026-01-26 23:41:51.358179366 +0000 UTC m=+2015.522886821" observedRunningTime="2026-01-26 23:41:51.995937206 +0000 UTC m=+2016.160644711" watchObservedRunningTime="2026-01-26 23:41:52.000237833 +0000 UTC m=+2016.164945308" Jan 26 23:41:54 crc kubenswrapper[4995]: I0126 23:41:54.997138 4995 generic.go:334] "Generic (PLEG): container finished" podID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerID="68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb" exitCode=0 Jan 26 23:41:54 crc kubenswrapper[4995]: I0126 23:41:54.997200 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" event={"ID":"6d19bd6c-1672-4d8d-af69-d1cda742bf83","Type":"ContainerDied","Data":"68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb"} Jan 26 23:41:54 crc kubenswrapper[4995]: I0126 23:41:54.998004 4995 scope.go:117] "RemoveContainer" containerID="68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb" Jan 26 23:41:55 crc kubenswrapper[4995]: I0126 23:41:55.509802 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-kpk7x_must-gather-7f9z4_6d19bd6c-1672-4d8d-af69-d1cda742bf83/gather/0.log" Jan 26 23:41:57 crc kubenswrapper[4995]: I0126 23:41:57.477806 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:57 crc kubenswrapper[4995]: I0126 23:41:57.477859 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:58 crc kubenswrapper[4995]: I0126 23:41:58.537853 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gljh7" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="registry-server" probeResult="failure" output=< Jan 26 23:41:58 crc kubenswrapper[4995]: timeout: failed to connect service ":50051" within 1s Jan 26 23:41:58 crc kubenswrapper[4995]: > Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.208771 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kpk7x/must-gather-7f9z4"] Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.209533 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerName="copy" containerID="cri-o://dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025" gracePeriod=2 Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.218913 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kpk7x/must-gather-7f9z4"] Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.633968 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kpk7x_must-gather-7f9z4_6d19bd6c-1672-4d8d-af69-d1cda742bf83/copy/0.log" Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.634663 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.733667 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqtdr\" (UniqueName: \"kubernetes.io/projected/6d19bd6c-1672-4d8d-af69-d1cda742bf83-kube-api-access-jqtdr\") pod \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.733738 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d19bd6c-1672-4d8d-af69-d1cda742bf83-must-gather-output\") pod \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.739416 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d19bd6c-1672-4d8d-af69-d1cda742bf83-kube-api-access-jqtdr" (OuterVolumeSpecName: "kube-api-access-jqtdr") pod "6d19bd6c-1672-4d8d-af69-d1cda742bf83" (UID: "6d19bd6c-1672-4d8d-af69-d1cda742bf83"). InnerVolumeSpecName "kube-api-access-jqtdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.835836 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqtdr\" (UniqueName: \"kubernetes.io/projected/6d19bd6c-1672-4d8d-af69-d1cda742bf83-kube-api-access-jqtdr\") on node \"crc\" DevicePath \"\"" Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.858554 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d19bd6c-1672-4d8d-af69-d1cda742bf83-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6d19bd6c-1672-4d8d-af69-d1cda742bf83" (UID: "6d19bd6c-1672-4d8d-af69-d1cda742bf83"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.937627 4995 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d19bd6c-1672-4d8d-af69-d1cda742bf83-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.079782 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kpk7x_must-gather-7f9z4_6d19bd6c-1672-4d8d-af69-d1cda742bf83/copy/0.log" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.080139 4995 generic.go:334] "Generic (PLEG): container finished" podID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerID="dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025" exitCode=143 Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.080198 4995 scope.go:117] "RemoveContainer" containerID="dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.080345 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.111741 4995 scope.go:117] "RemoveContainer" containerID="68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.184024 4995 scope.go:117] "RemoveContainer" containerID="dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025" Jan 26 23:42:05 crc kubenswrapper[4995]: E0126 23:42:05.184468 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025\": container with ID starting with dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025 not found: ID does not exist" containerID="dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.184509 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025"} err="failed to get container status \"dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025\": rpc error: code = NotFound desc = could not find container \"dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025\": container with ID starting with dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025 not found: ID does not exist" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.184535 4995 scope.go:117] "RemoveContainer" containerID="68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb" Jan 26 23:42:05 crc kubenswrapper[4995]: E0126 23:42:05.184803 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb\": container with ID starting with 
68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb not found: ID does not exist" containerID="68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.184835 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb"} err="failed to get container status \"68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb\": rpc error: code = NotFound desc = could not find container \"68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb\": container with ID starting with 68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb not found: ID does not exist" Jan 26 23:42:06 crc kubenswrapper[4995]: I0126 23:42:06.528053 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" path="/var/lib/kubelet/pods/6d19bd6c-1672-4d8d-af69-d1cda742bf83/volumes" Jan 26 23:42:07 crc kubenswrapper[4995]: I0126 23:42:07.536551 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:42:07 crc kubenswrapper[4995]: I0126 23:42:07.586786 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.123330 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gljh7"] Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.124149 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gljh7" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="registry-server" containerID="cri-o://16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b" gracePeriod=2 Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.634609 4995 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.785177 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxrd6\" (UniqueName: \"kubernetes.io/projected/c3de82ef-2dae-4204-9b32-878af19e4055-kube-api-access-dxrd6\") pod \"c3de82ef-2dae-4204-9b32-878af19e4055\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.785395 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-catalog-content\") pod \"c3de82ef-2dae-4204-9b32-878af19e4055\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.785461 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-utilities\") pod \"c3de82ef-2dae-4204-9b32-878af19e4055\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.786229 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-utilities" (OuterVolumeSpecName: "utilities") pod "c3de82ef-2dae-4204-9b32-878af19e4055" (UID: "c3de82ef-2dae-4204-9b32-878af19e4055"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.801346 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3de82ef-2dae-4204-9b32-878af19e4055-kube-api-access-dxrd6" (OuterVolumeSpecName: "kube-api-access-dxrd6") pod "c3de82ef-2dae-4204-9b32-878af19e4055" (UID: "c3de82ef-2dae-4204-9b32-878af19e4055"). 
InnerVolumeSpecName "kube-api-access-dxrd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.887362 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxrd6\" (UniqueName: \"kubernetes.io/projected/c3de82ef-2dae-4204-9b32-878af19e4055-kube-api-access-dxrd6\") on node \"crc\" DevicePath \"\"" Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.887885 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.917701 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3de82ef-2dae-4204-9b32-878af19e4055" (UID: "c3de82ef-2dae-4204-9b32-878af19e4055"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.989786 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.152153 4995 generic.go:334] "Generic (PLEG): container finished" podID="c3de82ef-2dae-4204-9b32-878af19e4055" containerID="16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b" exitCode=0 Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.152216 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gljh7" event={"ID":"c3de82ef-2dae-4204-9b32-878af19e4055","Type":"ContainerDied","Data":"16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b"} Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.152255 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gljh7" event={"ID":"c3de82ef-2dae-4204-9b32-878af19e4055","Type":"ContainerDied","Data":"f2a57a01c6fd27a753d1502c47deb5d11a61ac11fc497d91ee3965247570c99b"} Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.152285 4995 scope.go:117] "RemoveContainer" containerID="16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.152508 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.198672 4995 scope.go:117] "RemoveContainer" containerID="59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.250853 4995 scope.go:117] "RemoveContainer" containerID="3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.258294 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gljh7"] Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.275462 4995 scope.go:117] "RemoveContainer" containerID="16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b" Jan 26 23:42:12 crc kubenswrapper[4995]: E0126 23:42:12.275952 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b\": container with ID starting with 16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b not found: ID does not exist" containerID="16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.276220 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b"} err="failed to get container status \"16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b\": rpc error: code = NotFound desc = could not find container \"16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b\": container with ID starting with 16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b not found: ID does not exist" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.276407 4995 scope.go:117] "RemoveContainer" 
containerID="59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb" Jan 26 23:42:12 crc kubenswrapper[4995]: E0126 23:42:12.277754 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb\": container with ID starting with 59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb not found: ID does not exist" containerID="59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.278050 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb"} err="failed to get container status \"59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb\": rpc error: code = NotFound desc = could not find container \"59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb\": container with ID starting with 59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb not found: ID does not exist" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.278361 4995 scope.go:117] "RemoveContainer" containerID="3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3" Jan 26 23:42:12 crc kubenswrapper[4995]: E0126 23:42:12.278986 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3\": container with ID starting with 3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3 not found: ID does not exist" containerID="3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.279218 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3"} err="failed to get container status \"3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3\": rpc error: code = NotFound desc = could not find container \"3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3\": container with ID starting with 3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3 not found: ID does not exist" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.282144 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gljh7"] Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.529056 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" path="/var/lib/kubelet/pods/c3de82ef-2dae-4204-9b32-878af19e4055/volumes" Jan 26 23:42:21 crc kubenswrapper[4995]: I0126 23:42:21.296024 4995 scope.go:117] "RemoveContainer" containerID="1582e84b9afefe2ee6063a8f17ab45c4317bc68064db6d3d6e513c3859811183" Jan 26 23:42:21 crc kubenswrapper[4995]: I0126 23:42:21.336854 4995 scope.go:117] "RemoveContainer" containerID="75791934fa81195c3b5b4a00cd7de4aeb20bba8ee707df60b935a30d47992dd2" Jan 26 23:43:40 crc kubenswrapper[4995]: I0126 23:43:40.894181 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:43:40 crc kubenswrapper[4995]: I0126 23:43:40.894862 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 
23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.931940 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w8dvg"] Jan 26 23:43:49 crc kubenswrapper[4995]: E0126 23:43:49.932766 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="extract-content" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.932782 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="extract-content" Jan 26 23:43:49 crc kubenswrapper[4995]: E0126 23:43:49.932799 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerName="copy" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.932807 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerName="copy" Jan 26 23:43:49 crc kubenswrapper[4995]: E0126 23:43:49.932820 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerName="gather" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.932828 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerName="gather" Jan 26 23:43:49 crc kubenswrapper[4995]: E0126 23:43:49.932837 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="registry-server" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.932845 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="registry-server" Jan 26 23:43:49 crc kubenswrapper[4995]: E0126 23:43:49.932872 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="extract-utilities" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.932880 4995 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="extract-utilities" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.933043 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="registry-server" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.933070 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerName="copy" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.933081 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerName="gather" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.934411 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.953860 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8dvg"] Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.975748 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-utilities\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.975837 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9zps\" (UniqueName: \"kubernetes.io/projected/56abab47-51ea-48a0-a595-6a34b4e0ba6a-kube-api-access-l9zps\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.976000 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-catalog-content\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.077585 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-catalog-content\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.077718 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-utilities\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.077742 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9zps\" (UniqueName: \"kubernetes.io/projected/56abab47-51ea-48a0-a595-6a34b4e0ba6a-kube-api-access-l9zps\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.078159 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-catalog-content\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.078386 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-utilities\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.107911 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9zps\" (UniqueName: \"kubernetes.io/projected/56abab47-51ea-48a0-a595-6a34b4e0ba6a-kube-api-access-l9zps\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.294792 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.772072 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8dvg"] Jan 26 23:43:51 crc kubenswrapper[4995]: I0126 23:43:51.043516 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8dvg" event={"ID":"56abab47-51ea-48a0-a595-6a34b4e0ba6a","Type":"ContainerStarted","Data":"dc49137d78bc383af2f2f498f749c9f5c6b4f8acb9856168849a631a19561f32"} Jan 26 23:43:51 crc kubenswrapper[4995]: I0126 23:43:51.043568 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8dvg" event={"ID":"56abab47-51ea-48a0-a595-6a34b4e0ba6a","Type":"ContainerStarted","Data":"85fb938305699f9b1455e4e1b9ac4f403aa0c5749107fd53e4320f4161c82bc8"} Jan 26 23:43:52 crc kubenswrapper[4995]: I0126 23:43:52.057226 4995 generic.go:334] "Generic (PLEG): container finished" podID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerID="dc49137d78bc383af2f2f498f749c9f5c6b4f8acb9856168849a631a19561f32" exitCode=0 Jan 26 23:43:52 crc 
kubenswrapper[4995]: I0126 23:43:52.057354 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8dvg" event={"ID":"56abab47-51ea-48a0-a595-6a34b4e0ba6a","Type":"ContainerDied","Data":"dc49137d78bc383af2f2f498f749c9f5c6b4f8acb9856168849a631a19561f32"} Jan 26 23:43:53 crc kubenswrapper[4995]: I0126 23:43:53.073213 4995 generic.go:334] "Generic (PLEG): container finished" podID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerID="a12caeb671986b2e4e72da6722706674669ff02071d97c0a7fea27ba820a1040" exitCode=0 Jan 26 23:43:53 crc kubenswrapper[4995]: I0126 23:43:53.073327 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8dvg" event={"ID":"56abab47-51ea-48a0-a595-6a34b4e0ba6a","Type":"ContainerDied","Data":"a12caeb671986b2e4e72da6722706674669ff02071d97c0a7fea27ba820a1040"} Jan 26 23:43:54 crc kubenswrapper[4995]: I0126 23:43:54.087880 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8dvg" event={"ID":"56abab47-51ea-48a0-a595-6a34b4e0ba6a","Type":"ContainerStarted","Data":"844d332c60cb39e6b59bc8b8828a4406f2165d9145e463c4983cfa44852434a2"} Jan 26 23:43:54 crc kubenswrapper[4995]: I0126 23:43:54.108249 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w8dvg" podStartSLOduration=3.715322095 podStartE2EDuration="5.108223135s" podCreationTimestamp="2026-01-26 23:43:49 +0000 UTC" firstStartedPulling="2026-01-26 23:43:52.060184092 +0000 UTC m=+2136.224891587" lastFinishedPulling="2026-01-26 23:43:53.453085122 +0000 UTC m=+2137.617792627" observedRunningTime="2026-01-26 23:43:54.105919182 +0000 UTC m=+2138.270626667" watchObservedRunningTime="2026-01-26 23:43:54.108223135 +0000 UTC m=+2138.272930610" Jan 26 23:44:00 crc kubenswrapper[4995]: I0126 23:44:00.296167 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:44:00 crc kubenswrapper[4995]: I0126 23:44:00.296819 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:44:00 crc kubenswrapper[4995]: I0126 23:44:00.363851 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:44:01 crc kubenswrapper[4995]: I0126 23:44:01.228783 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:44:03 crc kubenswrapper[4995]: I0126 23:44:03.920445 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8dvg"] Jan 26 23:44:03 crc kubenswrapper[4995]: I0126 23:44:03.921295 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w8dvg" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerName="registry-server" containerID="cri-o://844d332c60cb39e6b59bc8b8828a4406f2165d9145e463c4983cfa44852434a2" gracePeriod=2 Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.187138 4995 generic.go:334] "Generic (PLEG): container finished" podID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerID="844d332c60cb39e6b59bc8b8828a4406f2165d9145e463c4983cfa44852434a2" exitCode=0 Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.187183 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8dvg" event={"ID":"56abab47-51ea-48a0-a595-6a34b4e0ba6a","Type":"ContainerDied","Data":"844d332c60cb39e6b59bc8b8828a4406f2165d9145e463c4983cfa44852434a2"} Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.421058 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.438072 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9zps\" (UniqueName: \"kubernetes.io/projected/56abab47-51ea-48a0-a595-6a34b4e0ba6a-kube-api-access-l9zps\") pod \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.438174 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-utilities\") pod \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.438214 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-catalog-content\") pod \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.440475 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-utilities" (OuterVolumeSpecName: "utilities") pod "56abab47-51ea-48a0-a595-6a34b4e0ba6a" (UID: "56abab47-51ea-48a0-a595-6a34b4e0ba6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.445955 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56abab47-51ea-48a0-a595-6a34b4e0ba6a-kube-api-access-l9zps" (OuterVolumeSpecName: "kube-api-access-l9zps") pod "56abab47-51ea-48a0-a595-6a34b4e0ba6a" (UID: "56abab47-51ea-48a0-a595-6a34b4e0ba6a"). InnerVolumeSpecName "kube-api-access-l9zps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.464015 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56abab47-51ea-48a0-a595-6a34b4e0ba6a" (UID: "56abab47-51ea-48a0-a595-6a34b4e0ba6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.539668 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.539928 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.540064 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9zps\" (UniqueName: \"kubernetes.io/projected/56abab47-51ea-48a0-a595-6a34b4e0ba6a-kube-api-access-l9zps\") on node \"crc\" DevicePath \"\"" Jan 26 23:44:04 crc kubenswrapper[4995]: E0126 23:44:04.696131 4995 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56abab47_51ea_48a0_a595_6a34b4e0ba6a.slice/crio-85fb938305699f9b1455e4e1b9ac4f403aa0c5749107fd53e4320f4161c82bc8\": RecentStats: unable to find data in memory cache]" Jan 26 23:44:05 crc kubenswrapper[4995]: I0126 23:44:05.202175 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8dvg" 
event={"ID":"56abab47-51ea-48a0-a595-6a34b4e0ba6a","Type":"ContainerDied","Data":"85fb938305699f9b1455e4e1b9ac4f403aa0c5749107fd53e4320f4161c82bc8"} Jan 26 23:44:05 crc kubenswrapper[4995]: I0126 23:44:05.202260 4995 scope.go:117] "RemoveContainer" containerID="844d332c60cb39e6b59bc8b8828a4406f2165d9145e463c4983cfa44852434a2" Jan 26 23:44:05 crc kubenswrapper[4995]: I0126 23:44:05.203412 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:44:05 crc kubenswrapper[4995]: I0126 23:44:05.237692 4995 scope.go:117] "RemoveContainer" containerID="a12caeb671986b2e4e72da6722706674669ff02071d97c0a7fea27ba820a1040" Jan 26 23:44:05 crc kubenswrapper[4995]: I0126 23:44:05.242406 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8dvg"] Jan 26 23:44:05 crc kubenswrapper[4995]: I0126 23:44:05.267127 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8dvg"] Jan 26 23:44:05 crc kubenswrapper[4995]: I0126 23:44:05.284057 4995 scope.go:117] "RemoveContainer" containerID="dc49137d78bc383af2f2f498f749c9f5c6b4f8acb9856168849a631a19561f32" Jan 26 23:44:06 crc kubenswrapper[4995]: I0126 23:44:06.537186 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" path="/var/lib/kubelet/pods/56abab47-51ea-48a0-a595-6a34b4e0ba6a/volumes" Jan 26 23:44:10 crc kubenswrapper[4995]: I0126 23:44:10.893448 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:44:10 crc kubenswrapper[4995]: I0126 23:44:10.893831 4995 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:44:40 crc kubenswrapper[4995]: I0126 23:44:40.893153 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:44:40 crc kubenswrapper[4995]: I0126 23:44:40.893817 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:44:40 crc kubenswrapper[4995]: I0126 23:44:40.893883 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:44:40 crc kubenswrapper[4995]: I0126 23:44:40.894740 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4acaaa2359dd7eaaa1880a32b4db4f9439b498f50ad90d55e3ac94e735bc5061"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:44:40 crc kubenswrapper[4995]: I0126 23:44:40.894822 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" 
containerID="cri-o://4acaaa2359dd7eaaa1880a32b4db4f9439b498f50ad90d55e3ac94e735bc5061" gracePeriod=600 Jan 26 23:44:41 crc kubenswrapper[4995]: I0126 23:44:41.547172 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="4acaaa2359dd7eaaa1880a32b4db4f9439b498f50ad90d55e3ac94e735bc5061" exitCode=0 Jan 26 23:44:41 crc kubenswrapper[4995]: I0126 23:44:41.547266 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"4acaaa2359dd7eaaa1880a32b4db4f9439b498f50ad90d55e3ac94e735bc5061"} Jan 26 23:44:41 crc kubenswrapper[4995]: I0126 23:44:41.547861 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"d4b45e4d8deb9701b6136e54a5fbffce80682787350892626a74607b53b30960"} Jan 26 23:44:41 crc kubenswrapper[4995]: I0126 23:44:41.547891 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.161723 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"] Jan 26 23:45:00 crc kubenswrapper[4995]: E0126 23:45:00.162563 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerName="extract-utilities" Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.162575 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerName="extract-utilities" Jan 26 23:45:00 crc kubenswrapper[4995]: E0126 23:45:00.162589 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" 
containerName="registry-server"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.162595 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerName="registry-server"
Jan 26 23:45:00 crc kubenswrapper[4995]: E0126 23:45:00.162618 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerName="extract-content"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.162625 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerName="extract-content"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.162769 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerName="registry-server"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.163388 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.166049 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.166543 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.215157 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"]
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.261965 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfxcm\" (UniqueName: \"kubernetes.io/projected/a0bdea8f-8192-42ae-a341-c4db0996136d-kube-api-access-rfxcm\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.262259 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0bdea8f-8192-42ae-a341-c4db0996136d-secret-volume\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.262294 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0bdea8f-8192-42ae-a341-c4db0996136d-config-volume\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.363932 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0bdea8f-8192-42ae-a341-c4db0996136d-secret-volume\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.363984 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0bdea8f-8192-42ae-a341-c4db0996136d-config-volume\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.364010 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfxcm\" (UniqueName: \"kubernetes.io/projected/a0bdea8f-8192-42ae-a341-c4db0996136d-kube-api-access-rfxcm\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.365932 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0bdea8f-8192-42ae-a341-c4db0996136d-config-volume\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.373997 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0bdea8f-8192-42ae-a341-c4db0996136d-secret-volume\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.382799 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfxcm\" (UniqueName: \"kubernetes.io/projected/a0bdea8f-8192-42ae-a341-c4db0996136d-kube-api-access-rfxcm\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.486500 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.914341 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"]
Jan 26 23:45:01 crc kubenswrapper[4995]: I0126 23:45:01.712938 4995 generic.go:334] "Generic (PLEG): container finished" podID="a0bdea8f-8192-42ae-a341-c4db0996136d" containerID="9b845987076a1ade135f1c53c3e851c31499abee1f7c290929b571d63bed551f" exitCode=0
Jan 26 23:45:01 crc kubenswrapper[4995]: I0126 23:45:01.713042 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh" event={"ID":"a0bdea8f-8192-42ae-a341-c4db0996136d","Type":"ContainerDied","Data":"9b845987076a1ade135f1c53c3e851c31499abee1f7c290929b571d63bed551f"}
Jan 26 23:45:01 crc kubenswrapper[4995]: I0126 23:45:01.714683 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh" event={"ID":"a0bdea8f-8192-42ae-a341-c4db0996136d","Type":"ContainerStarted","Data":"3e90300d645223f52dbdc51fd905cab8a8699a961aba3a43cc386be5e80c6d2f"}
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.134739 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.230260 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0bdea8f-8192-42ae-a341-c4db0996136d-config-volume\") pod \"a0bdea8f-8192-42ae-a341-c4db0996136d\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") "
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.230360 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0bdea8f-8192-42ae-a341-c4db0996136d-secret-volume\") pod \"a0bdea8f-8192-42ae-a341-c4db0996136d\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") "
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.230536 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfxcm\" (UniqueName: \"kubernetes.io/projected/a0bdea8f-8192-42ae-a341-c4db0996136d-kube-api-access-rfxcm\") pod \"a0bdea8f-8192-42ae-a341-c4db0996136d\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") "
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.231092 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0bdea8f-8192-42ae-a341-c4db0996136d-config-volume" (OuterVolumeSpecName: "config-volume") pod "a0bdea8f-8192-42ae-a341-c4db0996136d" (UID: "a0bdea8f-8192-42ae-a341-c4db0996136d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.231721 4995 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0bdea8f-8192-42ae-a341-c4db0996136d-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.235950 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0bdea8f-8192-42ae-a341-c4db0996136d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a0bdea8f-8192-42ae-a341-c4db0996136d" (UID: "a0bdea8f-8192-42ae-a341-c4db0996136d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.235955 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0bdea8f-8192-42ae-a341-c4db0996136d-kube-api-access-rfxcm" (OuterVolumeSpecName: "kube-api-access-rfxcm") pod "a0bdea8f-8192-42ae-a341-c4db0996136d" (UID: "a0bdea8f-8192-42ae-a341-c4db0996136d"). InnerVolumeSpecName "kube-api-access-rfxcm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.333842 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfxcm\" (UniqueName: \"kubernetes.io/projected/a0bdea8f-8192-42ae-a341-c4db0996136d-kube-api-access-rfxcm\") on node \"crc\" DevicePath \"\""
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.333882 4995 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0bdea8f-8192-42ae-a341-c4db0996136d-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.734722 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh" event={"ID":"a0bdea8f-8192-42ae-a341-c4db0996136d","Type":"ContainerDied","Data":"3e90300d645223f52dbdc51fd905cab8a8699a961aba3a43cc386be5e80c6d2f"}
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.734767 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e90300d645223f52dbdc51fd905cab8a8699a961aba3a43cc386be5e80c6d2f"
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.734793 4995 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:04 crc kubenswrapper[4995]: I0126 23:45:04.228950 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv"]
Jan 26 23:45:04 crc kubenswrapper[4995]: I0126 23:45:04.246731 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv"]
Jan 26 23:45:04 crc kubenswrapper[4995]: I0126 23:45:04.531547 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7de4fe23-2da4-47df-a68b-d6d5148ab964" path="/var/lib/kubelet/pods/7de4fe23-2da4-47df-a68b-d6d5148ab964/volumes"
Jan 26 23:45:21 crc kubenswrapper[4995]: I0126 23:45:21.562751 4995 scope.go:117] "RemoveContainer" containerID="a1b58f1c7c19e3271d8e92fc188032b01aa219cc41efeec1b600d96847739166"